00:00:00.000 Started by upstream project "autotest-per-patch" build number 126167 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 23924 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.065 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.066 The recommended git tool is: git 00:00:00.066 using credential 00000000-0000-0000-0000-000000000002 00:00:00.067 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.103 Fetching changes from the remote Git repository 00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.149 Using shallow fetch with depth 1 00:00:00.149 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.149 > git --version # timeout=10 00:00:00.187 > git --version # 'git version 2.39.2' 00:00:00.187 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.213 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.213 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/75/21875/22 # timeout=5 00:00:07.065 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.076 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.087 Checking out Revision 8c6732c9e0fe7c9c74cd1fb560a619e554726af3 (FETCH_HEAD) 00:00:07.087 > git config core.sparsecheckout # timeout=10 00:00:07.098 > git read-tree -mu HEAD # timeout=10 00:00:07.115 > git checkout -f 8c6732c9e0fe7c9c74cd1fb560a619e554726af3 # timeout=5 00:00:07.141 Commit message: "jenkins/jjb-config: Remove SPDK_TEST_RELEASE_BUILD from packaging job" 00:00:07.141 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:07.229 [Pipeline] Start of Pipeline 00:00:07.241 [Pipeline] library 00:00:07.242 Loading library shm_lib@master 00:00:07.243 Library shm_lib@master is cached. Copying from home. 00:00:07.262 [Pipeline] node 00:00:07.271 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.274 [Pipeline] { 00:00:07.313 [Pipeline] catchError 00:00:07.316 [Pipeline] { 00:00:07.335 [Pipeline] wrap 00:00:07.345 [Pipeline] { 00:00:07.352 [Pipeline] stage 00:00:07.353 [Pipeline] { (Prologue) 00:00:07.539 [Pipeline] sh 00:00:07.820 + logger -p user.info -t JENKINS-CI 00:00:07.839 [Pipeline] echo 00:00:07.841 Node: WFP8 00:00:07.847 [Pipeline] sh 00:00:08.146 [Pipeline] setCustomBuildProperty 00:00:08.160 [Pipeline] echo 00:00:08.161 Cleanup processes 00:00:08.166 [Pipeline] sh 00:00:08.446 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.446 311524 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.458 [Pipeline] sh 00:00:08.740 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.740 ++ grep -v 'sudo pgrep' 00:00:08.740 ++ awk '{print $1}' 00:00:08.740 + sudo kill -9 00:00:08.740 + true 00:00:08.755 [Pipeline] cleanWs 00:00:08.764 [WS-CLEANUP] Deleting project workspace... 00:00:08.764 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.770 [WS-CLEANUP] done 00:00:08.775 [Pipeline] setCustomBuildProperty 00:00:08.790 [Pipeline] sh 00:00:09.071 + sudo git config --global --replace-all safe.directory '*' 00:00:09.224 [Pipeline] httpRequest 00:00:09.259 [Pipeline] echo 00:00:09.261 Sorcerer 10.211.164.101 is alive 00:00:09.269 [Pipeline] httpRequest 00:00:09.272 HttpMethod: GET 00:00:09.273 URL: http://10.211.164.101/packages/jbp_8c6732c9e0fe7c9c74cd1fb560a619e554726af3.tar.gz 00:00:09.274 Sending request to url: http://10.211.164.101/packages/jbp_8c6732c9e0fe7c9c74cd1fb560a619e554726af3.tar.gz 00:00:09.287 Response Code: HTTP/1.1 200 OK 00:00:09.288 Success: Status code 200 is in the accepted range: 200,404 00:00:09.288 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_8c6732c9e0fe7c9c74cd1fb560a619e554726af3.tar.gz 00:00:16.573 [Pipeline] sh 00:00:16.857 + tar --no-same-owner -xf jbp_8c6732c9e0fe7c9c74cd1fb560a619e554726af3.tar.gz 00:00:16.873 [Pipeline] httpRequest 00:00:16.903 [Pipeline] echo 00:00:16.904 Sorcerer 10.211.164.101 is alive 00:00:16.913 [Pipeline] httpRequest 00:00:16.917 HttpMethod: GET 00:00:16.918 URL: http://10.211.164.101/packages/spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz 00:00:16.919 Sending request to url: http://10.211.164.101/packages/spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz 00:00:16.933 Response Code: HTTP/1.1 200 OK 00:00:16.934 Success: Status code 200 is in the accepted range: 200,404 00:00:16.934 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz 00:02:06.299 [Pipeline] sh 00:02:06.635 + tar --no-same-owner -xf spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz 00:02:09.204 [Pipeline] sh 00:02:09.486 + git -C spdk log --oneline -n5 00:02:09.486 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:02:09.486 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:02:09.486 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:02:09.486 719d03c6a sock/uring: only register net impl if supported 00:02:09.486 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:02:09.497 [Pipeline] } 00:02:09.514 [Pipeline] // stage 00:02:09.524 [Pipeline] stage 00:02:09.527 [Pipeline] { (Prepare) 00:02:09.546 [Pipeline] writeFile 00:02:09.563 [Pipeline] sh 00:02:09.842 + logger -p user.info -t JENKINS-CI 00:02:09.852 [Pipeline] sh 00:02:10.131 + logger -p user.info -t JENKINS-CI 00:02:10.143 [Pipeline] sh 00:02:10.424 + cat autorun-spdk.conf 00:02:10.424 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.424 SPDK_TEST_NVMF=1 00:02:10.424 SPDK_TEST_NVME_CLI=1 00:02:10.424 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.424 SPDK_TEST_NVMF_NICS=e810 00:02:10.424 SPDK_TEST_VFIOUSER=1 00:02:10.424 SPDK_RUN_UBSAN=1 00:02:10.424 NET_TYPE=phy 00:02:10.431 RUN_NIGHTLY=0 00:02:10.436 [Pipeline] readFile 00:02:10.462 [Pipeline] withEnv 00:02:10.464 [Pipeline] { 00:02:10.477 [Pipeline] sh 00:02:10.759 + set -ex 00:02:10.760 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:10.760 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:10.760 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.760 ++ SPDK_TEST_NVMF=1 00:02:10.760 ++ SPDK_TEST_NVME_CLI=1 00:02:10.760 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.760 ++ SPDK_TEST_NVMF_NICS=e810 00:02:10.760 ++ SPDK_TEST_VFIOUSER=1 00:02:10.760 ++ SPDK_RUN_UBSAN=1 00:02:10.760 ++ NET_TYPE=phy 00:02:10.760 ++ RUN_NIGHTLY=0 00:02:10.760 + case $SPDK_TEST_NVMF_NICS in 00:02:10.760 + DRIVERS=ice 00:02:10.760 + [[ tcp == 
\r\d\m\a ]] 00:02:10.760 + [[ -n ice ]] 00:02:10.760 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:10.760 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:10.760 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:10.760 rmmod: ERROR: Module irdma is not currently loaded 00:02:10.760 rmmod: ERROR: Module i40iw is not currently loaded 00:02:10.760 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:10.760 + true 00:02:10.760 + for D in $DRIVERS 00:02:10.760 + sudo modprobe ice 00:02:10.760 + exit 0 00:02:10.769 [Pipeline] } 00:02:10.788 [Pipeline] // withEnv 00:02:10.793 [Pipeline] } 00:02:10.811 [Pipeline] // stage 00:02:10.821 [Pipeline] catchError 00:02:10.822 [Pipeline] { 00:02:10.836 [Pipeline] timeout 00:02:10.837 Timeout set to expire in 50 min 00:02:10.838 [Pipeline] { 00:02:10.852 [Pipeline] stage 00:02:10.854 [Pipeline] { (Tests) 00:02:10.870 [Pipeline] sh 00:02:11.160 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.160 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.160 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.160 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:11.160 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.161 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:11.161 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:11.161 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:11.161 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:11.161 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:11.161 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:11.161 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.161 + source /etc/os-release 00:02:11.161 ++ NAME='Fedora Linux' 00:02:11.161 ++ VERSION='38 (Cloud Edition)' 00:02:11.161 ++ ID=fedora 00:02:11.161 ++ VERSION_ID=38 00:02:11.161 ++ VERSION_CODENAME= 00:02:11.161 ++ PLATFORM_ID=platform:f38 00:02:11.161 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:11.161 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:11.161 ++ LOGO=fedora-logo-icon 00:02:11.161 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:11.161 ++ HOME_URL=https://fedoraproject.org/ 00:02:11.161 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:11.161 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:11.161 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:11.161 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:11.161 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:11.161 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:11.161 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:11.161 ++ SUPPORT_END=2024-05-14 00:02:11.161 ++ VARIANT='Cloud Edition' 00:02:11.161 ++ VARIANT_ID=cloud 00:02:11.161 + uname -a 00:02:11.161 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:11.161 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:13.698 Hugepages 00:02:13.698 node hugesize free / total 00:02:13.698 node0 1048576kB 0 / 0 00:02:13.698 node0 2048kB 0 / 0 00:02:13.698 node1 1048576kB 0 / 0 00:02:13.698 node1 2048kB 0 / 0 00:02:13.698 00:02:13.698 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:13.698 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:13.698 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:13.698 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:13.698 I/OAT 
0000:00:04.3 8086 2021 0 ioatdma - - 00:02:13.698 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:13.698 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:13.698 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:13.698 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:13.698 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:13.698 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:13.698 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:13.698 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:13.698 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:13.698 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:13.698 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:13.698 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:13.698 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:13.698 + rm -f /tmp/spdk-ld-path 00:02:13.698 + source autorun-spdk.conf 00:02:13.698 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:13.698 ++ SPDK_TEST_NVMF=1 00:02:13.698 ++ SPDK_TEST_NVME_CLI=1 00:02:13.698 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:13.698 ++ SPDK_TEST_NVMF_NICS=e810 00:02:13.698 ++ SPDK_TEST_VFIOUSER=1 00:02:13.698 ++ SPDK_RUN_UBSAN=1 00:02:13.698 ++ NET_TYPE=phy 00:02:13.698 ++ RUN_NIGHTLY=0 00:02:13.698 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:13.698 + [[ -n '' ]] 00:02:13.698 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.698 + for M in /var/spdk/build-*-manifest.txt 00:02:13.698 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:13.698 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:13.698 + for M in /var/spdk/build-*-manifest.txt 00:02:13.698 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:13.698 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:13.698 ++ uname 00:02:13.698 + [[ Linux == \L\i\n\u\x ]] 00:02:13.698 + sudo dmesg -T 00:02:13.698 + sudo dmesg --clear 00:02:13.698 + dmesg_pid=312961 00:02:13.698 + [[ Fedora Linux == FreeBSD ]] 00:02:13.698 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:13.698 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:13.698 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:13.698 + [[ -x /usr/src/fio-static/fio ]] 00:02:13.698 + export FIO_BIN=/usr/src/fio-static/fio 00:02:13.698 + FIO_BIN=/usr/src/fio-static/fio 00:02:13.698 + sudo dmesg -Tw 00:02:13.698 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:13.698 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:13.698 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:13.698 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:13.698 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:13.698 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:13.698 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:13.698 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:13.698 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:13.698 Test configuration: 00:02:13.958 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:13.958 SPDK_TEST_NVMF=1 00:02:13.958 SPDK_TEST_NVME_CLI=1 00:02:13.958 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:13.958 SPDK_TEST_NVMF_NICS=e810 00:02:13.958 SPDK_TEST_VFIOUSER=1 00:02:13.958 SPDK_RUN_UBSAN=1 00:02:13.958 NET_TYPE=phy 00:02:13.958 RUN_NIGHTLY=0 11:12:57 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:13.958 11:12:57 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:13.958 11:12:57 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:13.958 11:12:57 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:13.958 11:12:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.958 11:12:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.958 11:12:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.958 11:12:57 -- paths/export.sh@5 -- $ export PATH 00:02:13.958 11:12:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.958 11:12:57 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:13.958 11:12:57 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:13.958 11:12:57 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721034777.XXXXXX 00:02:13.958 11:12:57 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721034777.IFkpD1 00:02:13.958 11:12:57 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:13.958 11:12:57 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:13.958 11:12:57 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:13.958 11:12:57 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:13.958 11:12:57 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:13.958 11:12:57 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:13.958 11:12:57 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:13.958 11:12:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.958 11:12:57 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:13.958 11:12:57 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:13.958 11:12:57 -- pm/common@17 -- $ local monitor 00:02:13.958 11:12:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.958 11:12:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.958 11:12:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.958 11:12:57 -- pm/common@21 -- $ date +%s 00:02:13.958 11:12:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.958 11:12:57 -- pm/common@21 -- $ date +%s 00:02:13.958 11:12:57 -- pm/common@25 -- $ sleep 1 00:02:13.958 11:12:57 -- pm/common@21 -- $ date +%s 00:02:13.958 11:12:57 -- pm/common@21 -- $ date +%s 00:02:13.958 11:12:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721034777 00:02:13.958 11:12:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721034777 00:02:13.958 11:12:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721034777 00:02:13.958 11:12:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721034777 00:02:13.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721034777_collect-vmstat.pm.log 00:02:13.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721034777_collect-cpu-load.pm.log 00:02:13.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721034777_collect-cpu-temp.pm.log 00:02:13.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721034777_collect-bmc-pm.bmc.pm.log 00:02:14.894 11:12:58 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:14.894 11:12:58 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:14.894 11:12:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:14.894 11:12:58 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:14.894 11:12:58 -- spdk/autobuild.sh@16 -- $ date -u 00:02:14.894 Mon Jul 15 09:12:58 AM UTC 2024 00:02:14.894 11:12:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:14.894 v24.09-pre-205-ge7cce062d 00:02:14.894 11:12:58 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:14.894 11:12:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:14.894 11:12:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:14.894 11:12:58 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:14.894 11:12:58 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:14.894 11:12:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.894 ************************************ 00:02:14.894 START TEST ubsan 00:02:14.894 ************************************ 00:02:14.894 11:12:58 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:14.894 using ubsan 00:02:14.894 00:02:14.894 real 0m0.000s 00:02:14.894 user 0m0.000s 00:02:14.894 sys 0m0.000s 00:02:14.894 11:12:58 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:14.894 11:12:58 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:14.894 ************************************ 00:02:14.894 END TEST ubsan 00:02:14.894 ************************************ 00:02:15.153 11:12:58 -- common/autotest_common.sh@1142 -- $ return 0 00:02:15.153 11:12:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:15.153 11:12:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:15.153 11:12:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:15.153 11:12:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:15.153 11:12:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:15.153 11:12:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:15.153 11:12:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:15.153 11:12:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:15.153 11:12:58 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:15.153 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:15.153 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:15.412 Using 'verbs' RDMA provider 00:02:28.565 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:40.799 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:40.799 Creating mk/config.mk...done. 00:02:40.799 Creating mk/cc.flags.mk...done. 00:02:40.799 Type 'make' to build. 00:02:40.799 11:13:23 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:02:40.799 11:13:23 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:40.799 11:13:23 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:40.799 11:13:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.799 ************************************ 00:02:40.799 START TEST make 00:02:40.799 ************************************ 00:02:40.799 11:13:23 make -- common/autotest_common.sh@1123 -- $ make -j96 00:02:40.799 make[1]: Nothing to be done for 'all'. 
00:02:41.748 The Meson build system 00:02:41.748 Version: 1.3.1 00:02:41.748 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:41.748 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:41.748 Build type: native build 00:02:41.748 Project name: libvfio-user 00:02:41.748 Project version: 0.0.1 00:02:41.748 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:41.748 C linker for the host machine: cc ld.bfd 2.39-16 00:02:41.748 Host machine cpu family: x86_64 00:02:41.748 Host machine cpu: x86_64 00:02:41.748 Run-time dependency threads found: YES 00:02:41.748 Library dl found: YES 00:02:41.748 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:41.748 Run-time dependency json-c found: YES 0.17 00:02:41.748 Run-time dependency cmocka found: YES 1.1.7 00:02:41.748 Program pytest-3 found: NO 00:02:41.748 Program flake8 found: NO 00:02:41.748 Program misspell-fixer found: NO 00:02:41.748 Program restructuredtext-lint found: NO 00:02:41.748 Program valgrind found: YES (/usr/bin/valgrind) 00:02:41.748 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:41.748 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:41.748 Compiler for C supports arguments -Wwrite-strings: YES 00:02:41.748 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:41.748 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:41.748 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:41.748 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:41.748 Build targets in project: 8 00:02:41.748 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:41.748 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:41.748 00:02:41.748 libvfio-user 0.0.1 00:02:41.748 00:02:41.748 User defined options 00:02:41.748 buildtype : debug 00:02:41.748 default_library: shared 00:02:41.748 libdir : /usr/local/lib 00:02:41.748 00:02:41.748 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:42.314 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:42.314 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:42.314 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:42.315 [3/37] Compiling C object samples/null.p/null.c.o 00:02:42.315 [4/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:42.315 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:42.315 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:42.315 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:42.315 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:42.315 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:42.315 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:42.315 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:42.315 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:42.315 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:42.315 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:42.315 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:42.315 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:42.315 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:42.315 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:42.315 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:42.315 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:42.315 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:42.315 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:42.315 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:42.315 [24/37] Compiling C object samples/server.p/server.c.o 00:02:42.315 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:42.315 [26/37] Compiling C object samples/client.p/client.c.o 00:02:42.315 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:42.315 [28/37] Linking target samples/client 00:02:42.315 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:42.573 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:42.573 [31/37] Linking target test/unit_tests 00:02:42.573 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:42.573 [33/37] Linking target samples/null 00:02:42.573 [34/37] Linking target samples/server 00:02:42.573 [35/37] Linking target samples/lspci 00:02:42.573 [36/37] Linking target samples/gpio-pci-idio-16 00:02:42.573 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:42.573 INFO: autodetecting backend as ninja 00:02:42.573 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:42.573 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:43.140 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:43.140 ninja: no work to do. 00:02:48.420 The Meson build system 00:02:48.420 Version: 1.3.1 00:02:48.420 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:48.420 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:48.420 Build type: native build 00:02:48.420 Program cat found: YES (/usr/bin/cat) 00:02:48.420 Project name: DPDK 00:02:48.420 Project version: 24.03.0 00:02:48.420 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:48.420 C linker for the host machine: cc ld.bfd 2.39-16 00:02:48.420 Host machine cpu family: x86_64 00:02:48.420 Host machine cpu: x86_64 00:02:48.420 Message: ## Building in Developer Mode ## 00:02:48.420 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:48.420 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:48.420 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:48.420 Program python3 found: YES (/usr/bin/python3) 00:02:48.420 Program cat found: YES (/usr/bin/cat) 00:02:48.420 Compiler for C supports arguments -march=native: YES 00:02:48.420 Checking for size of "void *" : 8 00:02:48.420 Checking for size of "void *" : 8 (cached) 00:02:48.420 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:48.420 Library m found: YES 00:02:48.420 Library numa found: YES 00:02:48.420 Has header "numaif.h" : YES 00:02:48.420 Library fdt found: NO 00:02:48.420 Library execinfo found: NO 00:02:48.420 Has header "execinfo.h" : YES 00:02:48.420 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:48.420 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:48.420 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:48.420 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:48.420 Run-time dependency openssl found: YES 3.0.9 00:02:48.420 Run-time dependency libpcap found: YES 1.10.4 00:02:48.420 Has header "pcap.h" with dependency libpcap: YES 00:02:48.420 Compiler for C supports arguments -Wcast-qual: YES 00:02:48.420 Compiler for C supports arguments -Wdeprecated: YES 00:02:48.420 Compiler for C supports arguments -Wformat: YES 00:02:48.420 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:48.420 Compiler for C supports arguments -Wformat-security: NO 00:02:48.420 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:48.420 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:48.420 Compiler for C supports arguments -Wnested-externs: YES 00:02:48.420 Compiler for C supports arguments -Wold-style-definition: YES 00:02:48.420 Compiler for C supports arguments -Wpointer-arith: YES 00:02:48.420 Compiler for C supports arguments -Wsign-compare: YES 00:02:48.420 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:48.420 Compiler for C supports arguments -Wundef: YES 00:02:48.420 Compiler for C supports arguments -Wwrite-strings: YES 00:02:48.420 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:48.420 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:48.420 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:48.420 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:48.420 Program objdump found: YES (/usr/bin/objdump) 00:02:48.420 Compiler for C supports arguments -mavx512f: YES 00:02:48.420 Checking if "AVX512 checking" compiles: YES 00:02:48.420 Fetching value of define "__SSE4_2__" : 1 00:02:48.420 Fetching value of define "__AES__" : 1 00:02:48.420 Fetching value of define "__AVX__" : 1 00:02:48.421 Fetching value of define "__AVX2__" : 1 00:02:48.421 Fetching value of define "__AVX512BW__" : 1 00:02:48.421 Fetching value of define "__AVX512CD__" : 1 00:02:48.421 Fetching value of define "__AVX512DQ__" : 1 00:02:48.421 Fetching value of define "__AVX512F__" : 1 00:02:48.421 Fetching value of define "__AVX512VL__" : 1 00:02:48.421 Fetching value of define "__PCLMUL__" : 1 00:02:48.421 Fetching value of define "__RDRND__" : 1 00:02:48.421 Fetching value of define "__RDSEED__" : 1 00:02:48.421 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:48.421 Fetching value of define "__znver1__" : (undefined) 00:02:48.421 Fetching value of define "__znver2__" : (undefined) 00:02:48.421 Fetching value of define "__znver3__" : (undefined) 00:02:48.421 Fetching value of define "__znver4__" : (undefined) 00:02:48.421 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:48.421 Message: lib/log: Defining dependency "log" 00:02:48.421 Message: lib/kvargs: Defining dependency "kvargs" 00:02:48.421 Message: lib/telemetry: Defining dependency "telemetry" 00:02:48.421 Checking for function "getentropy" : NO 00:02:48.421 Message: lib/eal: Defining dependency "eal" 00:02:48.421 Message: lib/ring: Defining dependency "ring" 00:02:48.421 Message: lib/rcu: Defining dependency "rcu" 00:02:48.421 Message: lib/mempool: Defining dependency "mempool" 00:02:48.421 Message: lib/mbuf: Defining dependency "mbuf" 00:02:48.421 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:48.421 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:48.421 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:48.421 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:48.421 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:48.421 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:48.421 Compiler for C supports arguments -mpclmul: YES 00:02:48.421 Compiler for C supports arguments -maes: YES 00:02:48.421 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:48.421 Compiler for C supports arguments -mavx512bw: YES 00:02:48.421 Compiler for C supports arguments -mavx512dq: YES 00:02:48.421 Compiler for C supports arguments -mavx512vl: YES 00:02:48.421 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:48.421 Compiler for C supports arguments -mavx2: YES 00:02:48.421 Compiler for C supports arguments -mavx: YES 00:02:48.421 Message: lib/net: Defining dependency "net" 00:02:48.421 Message: lib/meter: Defining dependency "meter" 00:02:48.421 Message: lib/ethdev: Defining dependency "ethdev" 00:02:48.421 Message: lib/pci: Defining dependency "pci" 00:02:48.421 Message: lib/cmdline: Defining dependency "cmdline" 00:02:48.421 Message: lib/hash: Defining dependency "hash" 00:02:48.421 Message: lib/timer: Defining dependency "timer" 00:02:48.421 Message: lib/compressdev: Defining dependency "compressdev" 00:02:48.421 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:48.421 Message: lib/dmadev: Defining dependency "dmadev" 00:02:48.421 
Compiler for C supports arguments -Wno-cast-qual: YES 00:02:48.421 Message: lib/power: Defining dependency "power" 00:02:48.421 Message: lib/reorder: Defining dependency "reorder" 00:02:48.421 Message: lib/security: Defining dependency "security" 00:02:48.421 Has header "linux/userfaultfd.h" : YES 00:02:48.421 Has header "linux/vduse.h" : YES 00:02:48.421 Message: lib/vhost: Defining dependency "vhost" 00:02:48.421 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:48.421 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:48.421 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:48.421 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:48.421 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:48.421 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:48.421 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:48.421 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:48.421 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:48.421 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:48.421 Program doxygen found: YES (/usr/bin/doxygen) 00:02:48.421 Configuring doxy-api-html.conf using configuration 00:02:48.421 Configuring doxy-api-man.conf using configuration 00:02:48.421 Program mandb found: YES (/usr/bin/mandb) 00:02:48.421 Program sphinx-build found: NO 00:02:48.421 Configuring rte_build_config.h using configuration 00:02:48.421 Message: 00:02:48.421 ================= 00:02:48.421 Applications Enabled 00:02:48.421 ================= 00:02:48.421 00:02:48.421 apps: 00:02:48.421 00:02:48.421 00:02:48.421 Message: 00:02:48.421 ================= 00:02:48.421 Libraries Enabled 00:02:48.421 ================= 00:02:48.421 00:02:48.421 libs: 00:02:48.421 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:48.421 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:48.421 cryptodev, dmadev, power, reorder, security, vhost, 00:02:48.421 00:02:48.421 Message: 00:02:48.421 =============== 00:02:48.421 Drivers Enabled 00:02:48.421 =============== 00:02:48.421 00:02:48.421 common: 00:02:48.421 00:02:48.421 bus: 00:02:48.421 pci, vdev, 00:02:48.421 mempool: 00:02:48.421 ring, 00:02:48.421 dma: 00:02:48.421 00:02:48.421 net: 00:02:48.421 00:02:48.421 crypto: 00:02:48.421 00:02:48.421 compress: 00:02:48.421 00:02:48.421 vdpa: 00:02:48.421 00:02:48.421 00:02:48.421 Message: 00:02:48.421 ================= 00:02:48.421 Content Skipped 00:02:48.421 ================= 00:02:48.421 00:02:48.421 apps: 00:02:48.421 dumpcap: explicitly disabled via build config 00:02:48.421 graph: explicitly disabled via build config 00:02:48.421 pdump: explicitly disabled via build config 00:02:48.421 proc-info: explicitly disabled via build config 00:02:48.421 test-acl: explicitly disabled via build config 00:02:48.421 test-bbdev: explicitly disabled via build config 00:02:48.421 test-cmdline: explicitly disabled via build config 00:02:48.421 test-compress-perf: explicitly disabled via build config 00:02:48.421 test-crypto-perf: explicitly disabled via build config 00:02:48.421 test-dma-perf: explicitly disabled via build config 00:02:48.421 test-eventdev: explicitly disabled via build config 00:02:48.421 test-fib: explicitly disabled via build config 00:02:48.421 test-flow-perf: explicitly disabled via build config 00:02:48.421 test-gpudev: explicitly disabled via build config 
00:02:48.421 test-mldev: explicitly disabled via build config 00:02:48.421 test-pipeline: explicitly disabled via build config 00:02:48.421 test-pmd: explicitly disabled via build config 00:02:48.421 test-regex: explicitly disabled via build config 00:02:48.421 test-sad: explicitly disabled via build config 00:02:48.421 test-security-perf: explicitly disabled via build config 00:02:48.421 00:02:48.421 libs: 00:02:48.421 argparse: explicitly disabled via build config 00:02:48.421 metrics: explicitly disabled via build config 00:02:48.421 acl: explicitly disabled via build config 00:02:48.421 bbdev: explicitly disabled via build config 00:02:48.421 bitratestats: explicitly disabled via build config 00:02:48.421 bpf: explicitly disabled via build config 00:02:48.421 cfgfile: explicitly disabled via build config 00:02:48.421 distributor: explicitly disabled via build config 00:02:48.421 efd: explicitly disabled via build config 00:02:48.421 eventdev: explicitly disabled via build config 00:02:48.421 dispatcher: explicitly disabled via build config 00:02:48.421 gpudev: explicitly disabled via build config 00:02:48.421 gro: explicitly disabled via build config 00:02:48.421 gso: explicitly disabled via build config 00:02:48.421 ip_frag: explicitly disabled via build config 00:02:48.421 jobstats: explicitly disabled via build config 00:02:48.421 latencystats: explicitly disabled via build config 00:02:48.421 lpm: explicitly disabled via build config 00:02:48.421 member: explicitly disabled via build config 00:02:48.421 pcapng: explicitly disabled via build config 00:02:48.421 rawdev: explicitly disabled via build config 00:02:48.421 regexdev: explicitly disabled via build config 00:02:48.421 mldev: explicitly disabled via build config 00:02:48.421 rib: explicitly disabled via build config 00:02:48.421 sched: explicitly disabled via build config 00:02:48.421 stack: explicitly disabled via build config 00:02:48.421 ipsec: explicitly disabled via build config 00:02:48.421 pdcp: explicitly disabled via build config 00:02:48.421 fib: explicitly disabled via build config 00:02:48.421 port: explicitly disabled via build config 00:02:48.421 pdump: explicitly disabled via build config 00:02:48.421 table: explicitly disabled via build config 00:02:48.421 pipeline: explicitly disabled via build config 00:02:48.421 graph: explicitly disabled via build config 00:02:48.421 node: explicitly disabled via build config 00:02:48.421 00:02:48.422 drivers: 00:02:48.422 common/cpt: not in enabled drivers build config 00:02:48.422 common/dpaax: not in enabled drivers build config 00:02:48.422 common/iavf: not in enabled drivers build config 00:02:48.422 common/idpf: not in enabled drivers build config 00:02:48.422 common/ionic: not in enabled drivers build config 00:02:48.422 common/mvep: not in enabled drivers build config 00:02:48.422 common/octeontx: not in enabled drivers build config 00:02:48.422 bus/auxiliary: not in enabled drivers build config 00:02:48.422 bus/cdx: not in enabled drivers build config 00:02:48.422 bus/dpaa: not in enabled drivers build config 00:02:48.422 bus/fslmc: not in enabled drivers build config 00:02:48.422 bus/ifpga: not in enabled drivers build config 00:02:48.422 bus/platform: not in enabled drivers build config 00:02:48.422 bus/uacce: not in enabled drivers build config 00:02:48.422 bus/vmbus: not in enabled drivers build config 00:02:48.422 common/cnxk: not in enabled drivers build config 00:02:48.422 common/mlx5: not in enabled drivers build config 00:02:48.422 common/nfp: not in 
enabled drivers build config 00:02:48.422 common/nitrox: not in enabled drivers build config 00:02:48.422 common/qat: not in enabled drivers build config 00:02:48.422 common/sfc_efx: not in enabled drivers build config 00:02:48.422 mempool/bucket: not in enabled drivers build config 00:02:48.422 mempool/cnxk: not in enabled drivers build config 00:02:48.422 mempool/dpaa: not in enabled drivers build config 00:02:48.422 mempool/dpaa2: not in enabled drivers build config 00:02:48.422 mempool/octeontx: not in enabled drivers build config 00:02:48.422 mempool/stack: not in enabled drivers build config 00:02:48.422 dma/cnxk: not in enabled drivers build config 00:02:48.422 dma/dpaa: not in enabled drivers build config 00:02:48.422 dma/dpaa2: not in enabled drivers build config 00:02:48.422 dma/hisilicon: not in enabled drivers build config 00:02:48.422 dma/idxd: not in enabled drivers build config 00:02:48.422 dma/ioat: not in enabled drivers build config 00:02:48.422 dma/skeleton: not in enabled drivers build config 00:02:48.422 net/af_packet: not in enabled drivers build config 00:02:48.422 net/af_xdp: not in enabled drivers build config 00:02:48.422 net/ark: not in enabled drivers build config 00:02:48.422 net/atlantic: not in enabled drivers build config 00:02:48.422 net/avp: not in enabled drivers build config 00:02:48.422 net/axgbe: not in enabled drivers build config 00:02:48.422 net/bnx2x: not in enabled drivers build config 00:02:48.422 net/bnxt: not in enabled drivers build config 00:02:48.422 net/bonding: not in enabled drivers build config 00:02:48.422 net/cnxk: not in enabled drivers build config 00:02:48.422 net/cpfl: not in enabled drivers build config 00:02:48.422 net/cxgbe: not in enabled drivers build config 00:02:48.422 net/dpaa: not in enabled drivers build config 00:02:48.422 net/dpaa2: not in enabled drivers build config 00:02:48.422 net/e1000: not in enabled drivers build config 00:02:48.422 net/ena: not in enabled drivers build config 00:02:48.422 net/enetc: not in enabled drivers build config 00:02:48.422 net/enetfec: not in enabled drivers build config 00:02:48.422 net/enic: not in enabled drivers build config 00:02:48.422 net/failsafe: not in enabled drivers build config 00:02:48.422 net/fm10k: not in enabled drivers build config 00:02:48.422 net/gve: not in enabled drivers build config 00:02:48.422 net/hinic: not in enabled drivers build config 00:02:48.422 net/hns3: not in enabled drivers build config 00:02:48.422 net/i40e: not in enabled drivers build config 00:02:48.422 net/iavf: not in enabled drivers build config 00:02:48.422 net/ice: not in enabled drivers build config 00:02:48.422 net/idpf: not in enabled drivers build config 00:02:48.422 net/igc: not in enabled drivers build config 00:02:48.422 net/ionic: not in enabled drivers build config 00:02:48.422 net/ipn3ke: not in enabled drivers build config 00:02:48.422 net/ixgbe: not in enabled drivers build config 00:02:48.422 net/mana: not in enabled drivers build config 00:02:48.422 net/memif: not in enabled drivers build config 00:02:48.422 net/mlx4: not in enabled drivers build config 00:02:48.422 net/mlx5: not in enabled drivers build config 00:02:48.422 net/mvneta: not in enabled drivers build config 00:02:48.422 net/mvpp2: not in enabled drivers build config 00:02:48.422 net/netvsc: not in enabled drivers build config 00:02:48.422 net/nfb: not in enabled drivers build config 00:02:48.422 net/nfp: not in enabled drivers build config 00:02:48.422 net/ngbe: not in enabled drivers build config 00:02:48.422 
net/null: not in enabled drivers build config 00:02:48.422 net/octeontx: not in enabled drivers build config 00:02:48.422 net/octeon_ep: not in enabled drivers build config 00:02:48.422 net/pcap: not in enabled drivers build config 00:02:48.422 net/pfe: not in enabled drivers build config 00:02:48.422 net/qede: not in enabled drivers build config 00:02:48.422 net/ring: not in enabled drivers build config 00:02:48.422 net/sfc: not in enabled drivers build config 00:02:48.422 net/softnic: not in enabled drivers build config 00:02:48.422 net/tap: not in enabled drivers build config 00:02:48.422 net/thunderx: not in enabled drivers build config 00:02:48.422 net/txgbe: not in enabled drivers build config 00:02:48.422 net/vdev_netvsc: not in enabled drivers build config 00:02:48.422 net/vhost: not in enabled drivers build config 00:02:48.422 net/virtio: not in enabled drivers build config 00:02:48.422 net/vmxnet3: not in enabled drivers build config 00:02:48.422 raw/*: missing internal dependency, "rawdev" 00:02:48.422 crypto/armv8: not in enabled drivers build config 00:02:48.422 crypto/bcmfs: not in enabled drivers build config 00:02:48.422 crypto/caam_jr: not in enabled drivers build config 00:02:48.422 crypto/ccp: not in enabled drivers build config 00:02:48.422 crypto/cnxk: not in enabled drivers build config 00:02:48.422 crypto/dpaa_sec: not in enabled drivers build config 00:02:48.422 crypto/dpaa2_sec: not in enabled drivers build config 00:02:48.422 crypto/ipsec_mb: not in enabled drivers build config 00:02:48.422 crypto/mlx5: not in enabled drivers build config 00:02:48.422 crypto/mvsam: not in enabled drivers build config 00:02:48.422 crypto/nitrox: not in enabled drivers build config 00:02:48.422 crypto/null: not in enabled drivers build config 00:02:48.422 crypto/octeontx: not in enabled drivers build config 00:02:48.422 crypto/openssl: not in enabled drivers build config 00:02:48.422 crypto/scheduler: not in enabled drivers build config 00:02:48.422 crypto/uadk: not in enabled drivers build config 00:02:48.422 crypto/virtio: not in enabled drivers build config 00:02:48.422 compress/isal: not in enabled drivers build config 00:02:48.422 compress/mlx5: not in enabled drivers build config 00:02:48.422 compress/nitrox: not in enabled drivers build config 00:02:48.422 compress/octeontx: not in enabled drivers build config 00:02:48.422 compress/zlib: not in enabled drivers build config 00:02:48.422 regex/*: missing internal dependency, "regexdev" 00:02:48.422 ml/*: missing internal dependency, "mldev" 00:02:48.422 vdpa/ifc: not in enabled drivers build config 00:02:48.422 vdpa/mlx5: not in enabled drivers build config 00:02:48.422 vdpa/nfp: not in enabled drivers build config 00:02:48.422 vdpa/sfc: not in enabled drivers build config 00:02:48.422 event/*: missing internal dependency, "eventdev" 00:02:48.422 baseband/*: missing internal dependency, "bbdev" 00:02:48.422 gpu/*: missing internal dependency, "gpudev" 00:02:48.422 00:02:48.422 00:02:48.422 Build targets in project: 85 00:02:48.422 00:02:48.422 DPDK 24.03.0 00:02:48.422 00:02:48.422 User defined options 00:02:48.422 buildtype : debug 00:02:48.422 default_library : shared 00:02:48.422 libdir : lib 00:02:48.422 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:48.422 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:48.422 c_link_args : 00:02:48.422 cpu_instruction_set: native 00:02:48.422 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:48.422 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:48.422 enable_docs : false 00:02:48.422 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:48.422 enable_kmods : false 00:02:48.422 max_lcores : 128 00:02:48.422 tests : false 00:02:48.422 00:02:48.422 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:49.236 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:49.236 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:49.236 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:49.236 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:49.236 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:49.236 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:49.236 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:49.236 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:49.236 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:49.236 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:49.236 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:49.236 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:49.236 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:49.236 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:49.236 [14/268] Linking static target lib/librte_kvargs.a 00:02:49.236 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:49.236 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:49.236 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:49.236 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:49.236 [19/268] Linking static target lib/librte_log.a 00:02:49.236 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:49.236 [21/268] Linking static target lib/librte_pci.a 00:02:49.236 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:49.236 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:49.236 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:49.236 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:49.496 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:49.496 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:49.496 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:49.496 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:49.496 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:49.496 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:49.496 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:49.496 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:49.496 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:49.496 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:49.496 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:49.496 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:49.496 [38/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:49.496 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:49.496 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:49.496 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:49.496 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:49.496 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:49.496 [44/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:49.496 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:49.496 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:49.496 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:49.496 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:49.496 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:49.496 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:49.496 [51/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:49.496 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:49.496 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:49.496 [54/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.496 [55/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:49.496 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:49.496 [57/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:49.496 [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:49.496 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:49.496 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:49.496 [61/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.496 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:49.496 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:49.496 [64/268] Linking static target lib/librte_telemetry.a 00:02:49.496 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:49.496 [66/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.496 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:49.496 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:49.496 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:49.496 [70/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:49.496 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:49.496 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:49.496 [73/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:49.496 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:49.496 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:49.496 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:49.496 [77/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:49.496 [78/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:49.496 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:49.496 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:49.496 [81/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:49.496 [82/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:49.496 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:49.496 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:49.496 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:49.496 [86/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.496 [87/268] Linking static target lib/librte_meter.a 00:02:49.496 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:49.496 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:49.496 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:49.496 [91/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:49.496 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:49.496 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.496 [94/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:49.496 [95/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:49.496 [96/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:49.496 [97/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.496 [98/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:49.496 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:49.757 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:49.757 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:49.757 [102/268] Linking static target lib/librte_ring.a 00:02:49.757 [103/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:49.757 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:49.757 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:49.757 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:49.757 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:49.757 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:49.757 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:49.757 [110/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.757 [111/268] Compiling C object 
lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.757 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:49.757 [113/268] Linking static target lib/librte_rcu.a 00:02:49.757 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:49.757 [115/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:49.757 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:49.757 [117/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:49.757 [118/268] Linking static target lib/librte_net.a 00:02:49.757 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:49.757 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:49.757 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.757 [122/268] Linking static target lib/librte_mempool.a 00:02:49.757 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:49.757 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:49.757 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:49.757 [126/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:49.757 [127/268] Linking static target lib/librte_eal.a 00:02:49.757 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:49.757 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:49.757 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:49.757 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:49.757 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:49.757 [133/268] Linking static target lib/librte_cmdline.a 00:02:49.757 [134/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.757 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.757 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:49.757 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:49.757 [138/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:49.757 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:49.757 [140/268] Linking target lib/librte_log.so.24.1 00:02:49.757 [141/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:49.757 [142/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:49.757 [143/268] Linking static target lib/librte_timer.a 00:02:49.757 [144/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.757 [145/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:50.018 [146/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.018 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:50.018 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.018 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:50.018 [150/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:50.018 [151/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:50.018 
[152/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:50.018 [153/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:50.018 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:50.018 [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:50.018 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:50.018 [157/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:50.018 [158/268] Linking static target lib/librte_mbuf.a 00:02:50.018 [159/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.018 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:50.018 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:50.018 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:50.018 [163/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:50.018 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:50.018 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:50.018 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:50.018 [167/268] Linking static target lib/librte_compressdev.a 00:02:50.018 [168/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:50.018 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:50.018 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:50.018 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.018 [172/268] Linking static target lib/librte_reorder.a 00:02:50.018 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:50.018 [174/268] Linking target lib/librte_kvargs.so.24.1 00:02:50.018 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:50.018 [176/268] Linking target lib/librte_telemetry.so.24.1 00:02:50.018 [177/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.018 [178/268] Linking static target lib/librte_power.a 00:02:50.018 [179/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:50.018 [180/268] Linking static target lib/librte_security.a 00:02:50.018 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:50.018 [182/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:50.019 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:50.019 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:50.019 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:50.019 [186/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:50.019 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:50.019 [188/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:50.019 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:50.019 [190/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:50.019 [191/268] Linking static target lib/librte_dmadev.a 00:02:50.019 [192/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:50.277 [193/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:50.277 [194/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:50.277 [195/268] Linking static target lib/librte_hash.a 00:02:50.277 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:50.277 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:50.277 [198/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:50.277 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:50.277 [200/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.277 [201/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.277 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.277 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:50.277 [204/268] Linking static target drivers/librte_mempool_ring.a 00:02:50.277 [205/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:50.277 [206/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:50.277 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.277 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.277 [209/268] Linking static target lib/librte_cryptodev.a 00:02:50.277 [210/268] Linking static target drivers/librte_bus_pci.a 00:02:50.277 [211/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.277 [212/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.277 [213/268] Linking static target drivers/librte_bus_vdev.a 00:02:50.536 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.536 [215/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.536 [216/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.794 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.794 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.794 [219/268] Linking static target lib/librte_ethdev.a 00:02:50.794 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.794 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.794 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.794 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.794 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.053 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.053 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.053 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.989 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:51.989 [229/268] Linking static target 
lib/librte_vhost.a 00:02:52.248 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.152 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.428 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.997 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.997 [234/268] Linking target lib/librte_eal.so.24.1 00:03:00.255 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:00.255 [236/268] Linking target lib/librte_meter.so.24.1 00:03:00.255 [237/268] Linking target lib/librte_ring.so.24.1 00:03:00.255 [238/268] Linking target lib/librte_timer.so.24.1 00:03:00.255 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:00.255 [240/268] Linking target lib/librte_pci.so.24.1 00:03:00.255 [241/268] Linking target lib/librte_dmadev.so.24.1 00:03:00.255 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:00.255 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:00.256 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:00.256 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:00.256 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:00.256 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:00.256 [248/268] Linking target lib/librte_mempool.so.24.1 00:03:00.514 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:00.514 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:00.514 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:00.514 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:00.514 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:00.772 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:00.772 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:00.772 [256/268] Linking target lib/librte_net.so.24.1 00:03:00.772 [257/268] Linking target lib/librte_reorder.so.24.1 00:03:00.772 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:00.772 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:00.772 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:01.031 [261/268] Linking target lib/librte_hash.so.24.1 00:03:01.031 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:01.031 [263/268] Linking target lib/librte_security.so.24.1 00:03:01.031 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:01.031 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:01.031 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:01.031 [267/268] Linking target lib/librte_power.so.24.1 00:03:01.031 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:01.031 INFO: autodetecting backend as ninja 00:03:01.031 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:01.968 CC lib/log/log.o 00:03:01.968 CC lib/log/log_flags.o 00:03:01.968 CC lib/log/log_deprecated.o 00:03:02.227 CC lib/ut/ut.o 00:03:02.227 
CC lib/ut_mock/mock.o 00:03:02.227 LIB libspdk_log.a 00:03:02.227 LIB libspdk_ut_mock.a 00:03:02.227 LIB libspdk_ut.a 00:03:02.227 SO libspdk_log.so.7.0 00:03:02.227 SO libspdk_ut_mock.so.6.0 00:03:02.227 SO libspdk_ut.so.2.0 00:03:02.227 SYMLINK libspdk_log.so 00:03:02.227 SYMLINK libspdk_ut_mock.so 00:03:02.501 SYMLINK libspdk_ut.so 00:03:02.760 CC lib/util/base64.o 00:03:02.760 CC lib/util/bit_array.o 00:03:02.760 CC lib/dma/dma.o 00:03:02.760 CC lib/util/cpuset.o 00:03:02.760 CC lib/util/crc16.o 00:03:02.760 CC lib/util/crc32.o 00:03:02.760 CC lib/util/crc32c.o 00:03:02.760 CC lib/util/crc32_ieee.o 00:03:02.760 CXX lib/trace_parser/trace.o 00:03:02.760 CC lib/util/crc64.o 00:03:02.760 CC lib/util/dif.o 00:03:02.760 CC lib/ioat/ioat.o 00:03:02.760 CC lib/util/fd.o 00:03:02.760 CC lib/util/file.o 00:03:02.760 CC lib/util/hexlify.o 00:03:02.760 CC lib/util/iov.o 00:03:02.760 CC lib/util/math.o 00:03:02.760 CC lib/util/pipe.o 00:03:02.760 CC lib/util/strerror_tls.o 00:03:02.760 CC lib/util/string.o 00:03:02.760 CC lib/util/uuid.o 00:03:02.760 CC lib/util/fd_group.o 00:03:02.760 CC lib/util/xor.o 00:03:02.760 CC lib/util/zipf.o 00:03:02.760 CC lib/vfio_user/host/vfio_user_pci.o 00:03:02.760 CC lib/vfio_user/host/vfio_user.o 00:03:02.760 LIB libspdk_dma.a 00:03:03.020 SO libspdk_dma.so.4.0 00:03:03.020 LIB libspdk_ioat.a 00:03:03.020 SYMLINK libspdk_dma.so 00:03:03.020 SO libspdk_ioat.so.7.0 00:03:03.020 SYMLINK libspdk_ioat.so 00:03:03.020 LIB libspdk_vfio_user.a 00:03:03.020 SO libspdk_vfio_user.so.5.0 00:03:03.020 LIB libspdk_util.a 00:03:03.020 SYMLINK libspdk_vfio_user.so 00:03:03.279 SO libspdk_util.so.9.1 00:03:03.279 SYMLINK libspdk_util.so 00:03:03.279 LIB libspdk_trace_parser.a 00:03:03.279 SO libspdk_trace_parser.so.5.0 00:03:03.537 SYMLINK libspdk_trace_parser.so 00:03:03.537 CC lib/idxd/idxd.o 00:03:03.537 CC lib/idxd/idxd_user.o 00:03:03.537 CC lib/idxd/idxd_kernel.o 00:03:03.537 CC lib/json/json_parse.o 00:03:03.537 CC lib/json/json_util.o 00:03:03.537 CC lib/json/json_write.o 00:03:03.537 CC lib/rdma_utils/rdma_utils.o 00:03:03.537 CC lib/rdma_provider/common.o 00:03:03.537 CC lib/conf/conf.o 00:03:03.537 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:03.537 CC lib/env_dpdk/env.o 00:03:03.537 CC lib/vmd/vmd.o 00:03:03.537 CC lib/env_dpdk/memory.o 00:03:03.537 CC lib/vmd/led.o 00:03:03.537 CC lib/env_dpdk/pci.o 00:03:03.537 CC lib/env_dpdk/init.o 00:03:03.537 CC lib/env_dpdk/threads.o 00:03:03.537 CC lib/env_dpdk/pci_ioat.o 00:03:03.537 CC lib/env_dpdk/pci_virtio.o 00:03:03.537 CC lib/env_dpdk/pci_vmd.o 00:03:03.537 CC lib/env_dpdk/pci_idxd.o 00:03:03.537 CC lib/env_dpdk/pci_event.o 00:03:03.537 CC lib/env_dpdk/sigbus_handler.o 00:03:03.537 CC lib/env_dpdk/pci_dpdk.o 00:03:03.537 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:03.537 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:03.795 LIB libspdk_rdma_provider.a 00:03:03.795 LIB libspdk_conf.a 00:03:03.795 SO libspdk_rdma_provider.so.6.0 00:03:03.795 SO libspdk_conf.so.6.0 00:03:03.795 LIB libspdk_rdma_utils.a 00:03:03.795 SO libspdk_rdma_utils.so.1.0 00:03:03.795 LIB libspdk_json.a 00:03:03.795 SYMLINK libspdk_rdma_provider.so 00:03:03.795 SYMLINK libspdk_conf.so 00:03:03.795 SO libspdk_json.so.6.0 00:03:04.053 SYMLINK libspdk_rdma_utils.so 00:03:04.053 SYMLINK libspdk_json.so 00:03:04.053 LIB libspdk_idxd.a 00:03:04.053 SO libspdk_idxd.so.12.0 00:03:04.053 LIB libspdk_vmd.a 00:03:04.053 SYMLINK libspdk_idxd.so 00:03:04.053 SO libspdk_vmd.so.6.0 00:03:04.312 SYMLINK libspdk_vmd.so 00:03:04.312 CC lib/jsonrpc/jsonrpc_server.o 00:03:04.312 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:03:04.312 CC lib/jsonrpc/jsonrpc_client.o 00:03:04.312 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:04.572 LIB libspdk_jsonrpc.a 00:03:04.572 SO libspdk_jsonrpc.so.6.0 00:03:04.572 SYMLINK libspdk_jsonrpc.so 00:03:04.572 LIB libspdk_env_dpdk.a 00:03:04.572 SO libspdk_env_dpdk.so.14.1 00:03:04.831 SYMLINK libspdk_env_dpdk.so 00:03:04.831 CC lib/rpc/rpc.o 00:03:05.090 LIB libspdk_rpc.a 00:03:05.090 SO libspdk_rpc.so.6.0 00:03:05.090 SYMLINK libspdk_rpc.so 00:03:05.350 CC lib/notify/notify.o 00:03:05.350 CC lib/notify/notify_rpc.o 00:03:05.350 CC lib/trace/trace.o 00:03:05.350 CC lib/trace/trace_flags.o 00:03:05.350 CC lib/trace/trace_rpc.o 00:03:05.609 CC lib/keyring/keyring.o 00:03:05.609 CC lib/keyring/keyring_rpc.o 00:03:05.609 LIB libspdk_notify.a 00:03:05.609 SO libspdk_notify.so.6.0 00:03:05.609 LIB libspdk_trace.a 00:03:05.609 LIB libspdk_keyring.a 00:03:05.609 SO libspdk_trace.so.10.0 00:03:05.609 SO libspdk_keyring.so.1.0 00:03:05.609 SYMLINK libspdk_notify.so 00:03:05.869 SYMLINK libspdk_keyring.so 00:03:05.869 SYMLINK libspdk_trace.so 00:03:06.129 CC lib/thread/thread.o 00:03:06.129 CC lib/thread/iobuf.o 00:03:06.129 CC lib/sock/sock.o 00:03:06.129 CC lib/sock/sock_rpc.o 00:03:06.388 LIB libspdk_sock.a 00:03:06.388 SO libspdk_sock.so.10.0 00:03:06.388 SYMLINK libspdk_sock.so 00:03:06.955 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:06.955 CC lib/nvme/nvme_ctrlr.o 00:03:06.955 CC lib/nvme/nvme_fabric.o 00:03:06.955 CC lib/nvme/nvme_ns_cmd.o 00:03:06.955 CC lib/nvme/nvme_ns.o 00:03:06.955 CC lib/nvme/nvme_pcie_common.o 00:03:06.955 CC lib/nvme/nvme_pcie.o 00:03:06.955 CC lib/nvme/nvme_qpair.o 00:03:06.955 CC lib/nvme/nvme.o 00:03:06.955 CC lib/nvme/nvme_quirks.o 00:03:06.955 CC lib/nvme/nvme_transport.o 00:03:06.955 CC lib/nvme/nvme_discovery.o 00:03:06.955 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:06.955 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:06.955 CC lib/nvme/nvme_tcp.o 00:03:06.955 CC lib/nvme/nvme_opal.o 00:03:06.955 CC lib/nvme/nvme_io_msg.o 00:03:06.955 CC lib/nvme/nvme_poll_group.o 00:03:06.955 CC lib/nvme/nvme_zns.o 00:03:06.955 CC lib/nvme/nvme_stubs.o 00:03:06.955 CC lib/nvme/nvme_auth.o 00:03:06.955 CC lib/nvme/nvme_cuse.o 00:03:06.955 CC lib/nvme/nvme_vfio_user.o 00:03:06.955 CC lib/nvme/nvme_rdma.o 00:03:07.214 LIB libspdk_thread.a 00:03:07.214 SO libspdk_thread.so.10.1 00:03:07.214 SYMLINK libspdk_thread.so 00:03:07.473 CC lib/accel/accel_rpc.o 00:03:07.473 CC lib/accel/accel.o 00:03:07.473 CC lib/accel/accel_sw.o 00:03:07.473 CC lib/vfu_tgt/tgt_endpoint.o 00:03:07.473 CC lib/virtio/virtio.o 00:03:07.473 CC lib/vfu_tgt/tgt_rpc.o 00:03:07.473 CC lib/virtio/virtio_vhost_user.o 00:03:07.473 CC lib/virtio/virtio_vfio_user.o 00:03:07.473 CC lib/virtio/virtio_pci.o 00:03:07.473 CC lib/init/json_config.o 00:03:07.473 CC lib/init/subsystem.o 00:03:07.473 CC lib/init/subsystem_rpc.o 00:03:07.473 CC lib/init/rpc.o 00:03:07.473 CC lib/blob/blobstore.o 00:03:07.473 CC lib/blob/request.o 00:03:07.473 CC lib/blob/zeroes.o 00:03:07.473 CC lib/blob/blob_bs_dev.o 00:03:07.732 LIB libspdk_init.a 00:03:07.732 SO libspdk_init.so.5.0 00:03:07.732 LIB libspdk_virtio.a 00:03:07.732 LIB libspdk_vfu_tgt.a 00:03:07.732 SO libspdk_vfu_tgt.so.3.0 00:03:07.732 SO libspdk_virtio.so.7.0 00:03:07.732 SYMLINK libspdk_init.so 00:03:07.991 SYMLINK libspdk_vfu_tgt.so 00:03:07.991 SYMLINK libspdk_virtio.so 00:03:07.991 CC lib/event/app.o 00:03:07.991 CC lib/event/reactor.o 00:03:07.991 CC lib/event/log_rpc.o 00:03:07.991 CC lib/event/app_rpc.o 00:03:07.991 CC 
lib/event/scheduler_static.o 00:03:08.252 LIB libspdk_accel.a 00:03:08.252 SO libspdk_accel.so.15.1 00:03:08.252 SYMLINK libspdk_accel.so 00:03:08.252 LIB libspdk_nvme.a 00:03:08.512 LIB libspdk_event.a 00:03:08.512 SO libspdk_event.so.14.0 00:03:08.512 SO libspdk_nvme.so.13.1 00:03:08.512 SYMLINK libspdk_event.so 00:03:08.512 CC lib/bdev/bdev.o 00:03:08.512 CC lib/bdev/bdev_rpc.o 00:03:08.512 CC lib/bdev/bdev_zone.o 00:03:08.512 CC lib/bdev/part.o 00:03:08.512 CC lib/bdev/scsi_nvme.o 00:03:08.772 SYMLINK libspdk_nvme.so 00:03:09.712 LIB libspdk_blob.a 00:03:09.712 SO libspdk_blob.so.11.0 00:03:09.712 SYMLINK libspdk_blob.so 00:03:09.971 CC lib/lvol/lvol.o 00:03:09.971 CC lib/blobfs/blobfs.o 00:03:09.971 CC lib/blobfs/tree.o 00:03:10.538 LIB libspdk_bdev.a 00:03:10.538 SO libspdk_bdev.so.15.1 00:03:10.538 SYMLINK libspdk_bdev.so 00:03:10.538 LIB libspdk_blobfs.a 00:03:10.538 SO libspdk_blobfs.so.10.0 00:03:10.538 LIB libspdk_lvol.a 00:03:10.797 SO libspdk_lvol.so.10.0 00:03:10.797 SYMLINK libspdk_blobfs.so 00:03:10.797 SYMLINK libspdk_lvol.so 00:03:10.797 CC lib/scsi/dev.o 00:03:10.797 CC lib/scsi/lun.o 00:03:10.797 CC lib/scsi/port.o 00:03:10.797 CC lib/scsi/scsi.o 00:03:10.797 CC lib/scsi/scsi_bdev.o 00:03:10.797 CC lib/nvmf/ctrlr.o 00:03:10.797 CC lib/scsi/scsi_pr.o 00:03:10.797 CC lib/nvmf/ctrlr_discovery.o 00:03:10.797 CC lib/scsi/scsi_rpc.o 00:03:10.797 CC lib/nvmf/ctrlr_bdev.o 00:03:10.797 CC lib/scsi/task.o 00:03:10.797 CC lib/nvmf/subsystem.o 00:03:10.797 CC lib/ftl/ftl_core.o 00:03:10.797 CC lib/nvmf/nvmf.o 00:03:10.797 CC lib/ftl/ftl_init.o 00:03:10.797 CC lib/nvmf/nvmf_rpc.o 00:03:10.797 CC lib/nbd/nbd.o 00:03:10.797 CC lib/ftl/ftl_layout.o 00:03:10.797 CC lib/nvmf/transport.o 00:03:10.797 CC lib/nbd/nbd_rpc.o 00:03:10.797 CC lib/ftl/ftl_debug.o 00:03:10.797 CC lib/ftl/ftl_io.o 00:03:10.797 CC lib/nvmf/tcp.o 00:03:10.797 CC lib/ftl/ftl_sb.o 00:03:10.797 CC lib/ublk/ublk.o 00:03:10.797 CC lib/nvmf/stubs.o 00:03:10.797 CC lib/nvmf/vfio_user.o 00:03:10.797 CC lib/nvmf/mdns_server.o 00:03:10.797 CC lib/ublk/ublk_rpc.o 00:03:10.797 CC lib/ftl/ftl_l2p.o 00:03:10.797 CC lib/ftl/ftl_l2p_flat.o 00:03:10.797 CC lib/nvmf/auth.o 00:03:10.797 CC lib/nvmf/rdma.o 00:03:10.797 CC lib/ftl/ftl_nv_cache.o 00:03:10.797 CC lib/ftl/ftl_band.o 00:03:10.797 CC lib/ftl/ftl_band_ops.o 00:03:10.797 CC lib/ftl/ftl_writer.o 00:03:10.797 CC lib/ftl/ftl_rq.o 00:03:10.797 CC lib/ftl/ftl_reloc.o 00:03:10.797 CC lib/ftl/ftl_l2p_cache.o 00:03:10.797 CC lib/ftl/ftl_p2l.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:10.797 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:10.797 CC lib/ftl/utils/ftl_conf.o 00:03:10.797 CC lib/ftl/utils/ftl_md.o 00:03:10.797 CC lib/ftl/utils/ftl_mempool.o 00:03:10.797 CC lib/ftl/utils/ftl_property.o 00:03:10.797 CC lib/ftl/utils/ftl_bitmap.o 00:03:10.797 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:10.797 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:10.797 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:10.797 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:10.797 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:10.797 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:10.797 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:10.797 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:10.797 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:10.797 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:10.797 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:10.797 CC lib/ftl/base/ftl_base_dev.o 00:03:10.797 CC lib/ftl/ftl_trace.o 00:03:10.797 CC lib/ftl/base/ftl_base_bdev.o 00:03:11.364 LIB libspdk_scsi.a 00:03:11.364 LIB libspdk_nbd.a 00:03:11.364 SO libspdk_scsi.so.9.0 00:03:11.364 SO libspdk_nbd.so.7.0 00:03:11.364 SYMLINK libspdk_scsi.so 00:03:11.364 SYMLINK libspdk_nbd.so 00:03:11.622 LIB libspdk_ublk.a 00:03:11.622 SO libspdk_ublk.so.3.0 00:03:11.622 SYMLINK libspdk_ublk.so 00:03:11.622 LIB libspdk_ftl.a 00:03:11.622 CC lib/vhost/vhost.o 00:03:11.622 CC lib/vhost/vhost_rpc.o 00:03:11.622 CC lib/vhost/vhost_blk.o 00:03:11.622 CC lib/vhost/vhost_scsi.o 00:03:11.622 CC lib/vhost/rte_vhost_user.o 00:03:11.622 CC lib/iscsi/conn.o 00:03:11.622 CC lib/iscsi/init_grp.o 00:03:11.622 CC lib/iscsi/iscsi.o 00:03:11.622 CC lib/iscsi/md5.o 00:03:11.622 CC lib/iscsi/param.o 00:03:11.622 CC lib/iscsi/portal_grp.o 00:03:11.622 CC lib/iscsi/tgt_node.o 00:03:11.622 CC lib/iscsi/iscsi_subsystem.o 00:03:11.622 CC lib/iscsi/iscsi_rpc.o 00:03:11.622 CC lib/iscsi/task.o 00:03:11.880 SO libspdk_ftl.so.9.0 00:03:12.140 SYMLINK libspdk_ftl.so 00:03:12.399 LIB libspdk_nvmf.a 00:03:12.658 LIB libspdk_vhost.a 00:03:12.658 SO libspdk_vhost.so.8.0 00:03:12.658 SO libspdk_nvmf.so.18.1 00:03:12.658 SYMLINK libspdk_vhost.so 00:03:12.658 LIB libspdk_iscsi.a 00:03:12.658 SYMLINK libspdk_nvmf.so 00:03:12.919 SO libspdk_iscsi.so.8.0 00:03:12.919 SYMLINK libspdk_iscsi.so 00:03:13.488 CC module/env_dpdk/env_dpdk_rpc.o 00:03:13.488 CC module/vfu_device/vfu_virtio.o 00:03:13.488 CC module/vfu_device/vfu_virtio_blk.o 00:03:13.488 CC module/vfu_device/vfu_virtio_rpc.o 00:03:13.488 CC module/vfu_device/vfu_virtio_scsi.o 00:03:13.488 CC module/accel/dsa/accel_dsa_rpc.o 00:03:13.488 CC module/accel/dsa/accel_dsa.o 00:03:13.488 LIB libspdk_env_dpdk_rpc.a 00:03:13.488 CC module/accel/iaa/accel_iaa.o 00:03:13.488 CC module/accel/iaa/accel_iaa_rpc.o 00:03:13.488 CC module/scheduler/gscheduler/gscheduler.o 00:03:13.488 CC module/keyring/linux/keyring.o 00:03:13.488 CC module/keyring/linux/keyring_rpc.o 00:03:13.488 CC module/keyring/file/keyring.o 00:03:13.488 CC module/accel/error/accel_error.o 00:03:13.488 CC module/keyring/file/keyring_rpc.o 00:03:13.488 CC module/accel/error/accel_error_rpc.o 00:03:13.488 CC module/blob/bdev/blob_bdev.o 00:03:13.488 CC module/accel/ioat/accel_ioat.o 00:03:13.488 CC module/accel/ioat/accel_ioat_rpc.o 00:03:13.488 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:13.488 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:13.488 CC module/sock/posix/posix.o 00:03:13.488 SO libspdk_env_dpdk_rpc.so.6.0 00:03:13.747 SYMLINK libspdk_env_dpdk_rpc.so 00:03:13.747 LIB libspdk_scheduler_gscheduler.a 00:03:13.747 LIB libspdk_keyring_linux.a 00:03:13.747 LIB libspdk_keyring_file.a 00:03:13.747 LIB libspdk_scheduler_dpdk_governor.a 00:03:13.747 LIB libspdk_accel_error.a 00:03:13.747 SO libspdk_scheduler_gscheduler.so.4.0 00:03:13.747 LIB libspdk_accel_iaa.a 00:03:13.747 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:13.747 LIB libspdk_scheduler_dynamic.a 00:03:13.747 SO libspdk_keyring_file.so.1.0 00:03:13.747 SO libspdk_keyring_linux.so.1.0 00:03:13.747 LIB libspdk_accel_ioat.a 00:03:13.747 LIB libspdk_accel_dsa.a 00:03:13.747 SO 
libspdk_accel_error.so.2.0 00:03:13.747 SO libspdk_accel_iaa.so.3.0 00:03:13.747 SO libspdk_scheduler_dynamic.so.4.0 00:03:13.747 SO libspdk_accel_ioat.so.6.0 00:03:13.747 LIB libspdk_blob_bdev.a 00:03:13.747 SYMLINK libspdk_scheduler_gscheduler.so 00:03:13.747 SO libspdk_accel_dsa.so.5.0 00:03:13.747 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:13.747 SYMLINK libspdk_keyring_linux.so 00:03:13.747 SYMLINK libspdk_keyring_file.so 00:03:13.747 SYMLINK libspdk_scheduler_dynamic.so 00:03:13.747 SYMLINK libspdk_accel_error.so 00:03:13.747 SO libspdk_blob_bdev.so.11.0 00:03:13.747 SYMLINK libspdk_accel_iaa.so 00:03:13.747 SYMLINK libspdk_accel_ioat.so 00:03:14.007 SYMLINK libspdk_accel_dsa.so 00:03:14.007 SYMLINK libspdk_blob_bdev.so 00:03:14.007 LIB libspdk_vfu_device.a 00:03:14.007 SO libspdk_vfu_device.so.3.0 00:03:14.007 SYMLINK libspdk_vfu_device.so 00:03:14.267 LIB libspdk_sock_posix.a 00:03:14.267 SO libspdk_sock_posix.so.6.0 00:03:14.267 SYMLINK libspdk_sock_posix.so 00:03:14.267 CC module/bdev/error/vbdev_error.o 00:03:14.267 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:14.267 CC module/bdev/error/vbdev_error_rpc.o 00:03:14.267 CC module/bdev/passthru/vbdev_passthru.o 00:03:14.267 CC module/bdev/delay/vbdev_delay.o 00:03:14.267 CC module/bdev/gpt/gpt.o 00:03:14.267 CC module/bdev/gpt/vbdev_gpt.o 00:03:14.267 CC module/bdev/malloc/bdev_malloc.o 00:03:14.267 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:14.267 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:14.267 CC module/blobfs/bdev/blobfs_bdev.o 00:03:14.267 CC module/bdev/lvol/vbdev_lvol.o 00:03:14.267 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:14.267 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:14.267 CC module/bdev/raid/bdev_raid.o 00:03:14.267 CC module/bdev/split/vbdev_split.o 00:03:14.267 CC module/bdev/raid/bdev_raid_rpc.o 00:03:14.267 CC module/bdev/ftl/bdev_ftl.o 00:03:14.267 CC module/bdev/null/bdev_null.o 00:03:14.267 CC module/bdev/split/vbdev_split_rpc.o 00:03:14.267 CC module/bdev/raid/bdev_raid_sb.o 00:03:14.267 CC module/bdev/iscsi/bdev_iscsi.o 00:03:14.267 CC module/bdev/null/bdev_null_rpc.o 00:03:14.267 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:14.267 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:14.267 CC module/bdev/raid/raid0.o 00:03:14.267 CC module/bdev/raid/raid1.o 00:03:14.267 CC module/bdev/raid/concat.o 00:03:14.267 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:14.267 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:14.267 CC module/bdev/nvme/bdev_nvme.o 00:03:14.267 CC module/bdev/nvme/nvme_rpc.o 00:03:14.267 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:14.267 CC module/bdev/nvme/bdev_mdns_client.o 00:03:14.267 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:14.267 CC module/bdev/nvme/vbdev_opal.o 00:03:14.267 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:14.267 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:14.267 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:14.267 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:14.267 CC module/bdev/aio/bdev_aio.o 00:03:14.267 CC module/bdev/aio/bdev_aio_rpc.o 00:03:14.526 LIB libspdk_blobfs_bdev.a 00:03:14.526 SO libspdk_blobfs_bdev.so.6.0 00:03:14.526 LIB libspdk_bdev_error.a 00:03:14.526 LIB libspdk_bdev_split.a 00:03:14.839 SO libspdk_bdev_split.so.6.0 00:03:14.839 LIB libspdk_bdev_null.a 00:03:14.839 SO libspdk_bdev_error.so.6.0 00:03:14.839 LIB libspdk_bdev_passthru.a 00:03:14.839 LIB libspdk_bdev_gpt.a 00:03:14.839 SO libspdk_bdev_null.so.6.0 00:03:14.839 SYMLINK libspdk_blobfs_bdev.so 00:03:14.839 SO libspdk_bdev_passthru.so.6.0 00:03:14.839 SO 
libspdk_bdev_gpt.so.6.0 00:03:14.839 LIB libspdk_bdev_ftl.a 00:03:14.839 SYMLINK libspdk_bdev_error.so 00:03:14.839 LIB libspdk_bdev_delay.a 00:03:14.839 SYMLINK libspdk_bdev_split.so 00:03:14.839 LIB libspdk_bdev_malloc.a 00:03:14.839 LIB libspdk_bdev_aio.a 00:03:14.839 LIB libspdk_bdev_iscsi.a 00:03:14.839 SO libspdk_bdev_ftl.so.6.0 00:03:14.839 LIB libspdk_bdev_zone_block.a 00:03:14.839 SO libspdk_bdev_delay.so.6.0 00:03:14.839 SYMLINK libspdk_bdev_gpt.so 00:03:14.839 SYMLINK libspdk_bdev_null.so 00:03:14.839 SO libspdk_bdev_aio.so.6.0 00:03:14.839 SYMLINK libspdk_bdev_passthru.so 00:03:14.839 SO libspdk_bdev_malloc.so.6.0 00:03:14.839 SO libspdk_bdev_iscsi.so.6.0 00:03:14.839 SO libspdk_bdev_zone_block.so.6.0 00:03:14.839 SYMLINK libspdk_bdev_delay.so 00:03:14.839 SYMLINK libspdk_bdev_ftl.so 00:03:14.839 SYMLINK libspdk_bdev_aio.so 00:03:14.839 SYMLINK libspdk_bdev_malloc.so 00:03:14.839 SYMLINK libspdk_bdev_iscsi.so 00:03:14.839 LIB libspdk_bdev_lvol.a 00:03:14.839 SYMLINK libspdk_bdev_zone_block.so 00:03:14.839 LIB libspdk_bdev_virtio.a 00:03:14.839 SO libspdk_bdev_lvol.so.6.0 00:03:14.839 SO libspdk_bdev_virtio.so.6.0 00:03:15.098 SYMLINK libspdk_bdev_lvol.so 00:03:15.098 SYMLINK libspdk_bdev_virtio.so 00:03:15.098 LIB libspdk_bdev_raid.a 00:03:15.098 SO libspdk_bdev_raid.so.6.0 00:03:15.357 SYMLINK libspdk_bdev_raid.so 00:03:15.923 LIB libspdk_bdev_nvme.a 00:03:15.923 SO libspdk_bdev_nvme.so.7.0 00:03:16.182 SYMLINK libspdk_bdev_nvme.so 00:03:16.749 CC module/event/subsystems/iobuf/iobuf.o 00:03:16.749 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:16.749 CC module/event/subsystems/vmd/vmd.o 00:03:16.749 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:16.749 CC module/event/subsystems/scheduler/scheduler.o 00:03:16.749 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:16.749 CC module/event/subsystems/sock/sock.o 00:03:16.749 CC module/event/subsystems/keyring/keyring.o 00:03:16.749 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:17.007 LIB libspdk_event_vhost_blk.a 00:03:17.007 LIB libspdk_event_keyring.a 00:03:17.007 LIB libspdk_event_vmd.a 00:03:17.007 LIB libspdk_event_iobuf.a 00:03:17.007 LIB libspdk_event_scheduler.a 00:03:17.007 LIB libspdk_event_sock.a 00:03:17.007 LIB libspdk_event_vfu_tgt.a 00:03:17.007 SO libspdk_event_vhost_blk.so.3.0 00:03:17.007 SO libspdk_event_keyring.so.1.0 00:03:17.007 SO libspdk_event_scheduler.so.4.0 00:03:17.007 SO libspdk_event_vmd.so.6.0 00:03:17.007 SO libspdk_event_iobuf.so.3.0 00:03:17.007 SO libspdk_event_sock.so.5.0 00:03:17.007 SO libspdk_event_vfu_tgt.so.3.0 00:03:17.007 SYMLINK libspdk_event_keyring.so 00:03:17.007 SYMLINK libspdk_event_vhost_blk.so 00:03:17.007 SYMLINK libspdk_event_scheduler.so 00:03:17.007 SYMLINK libspdk_event_sock.so 00:03:17.007 SYMLINK libspdk_event_vmd.so 00:03:17.007 SYMLINK libspdk_event_iobuf.so 00:03:17.007 SYMLINK libspdk_event_vfu_tgt.so 00:03:17.266 CC module/event/subsystems/accel/accel.o 00:03:17.525 LIB libspdk_event_accel.a 00:03:17.525 SO libspdk_event_accel.so.6.0 00:03:17.525 SYMLINK libspdk_event_accel.so 00:03:17.784 CC module/event/subsystems/bdev/bdev.o 00:03:18.043 LIB libspdk_event_bdev.a 00:03:18.043 SO libspdk_event_bdev.so.6.0 00:03:18.043 SYMLINK libspdk_event_bdev.so 00:03:18.300 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:18.300 CC module/event/subsystems/scsi/scsi.o 00:03:18.300 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:18.300 CC module/event/subsystems/nbd/nbd.o 00:03:18.300 CC module/event/subsystems/ublk/ublk.o 00:03:18.558 LIB libspdk_event_scsi.a 
00:03:18.558 LIB libspdk_event_nbd.a 00:03:18.558 LIB libspdk_event_ublk.a 00:03:18.558 SO libspdk_event_scsi.so.6.0 00:03:18.558 SO libspdk_event_nbd.so.6.0 00:03:18.558 SO libspdk_event_ublk.so.3.0 00:03:18.558 LIB libspdk_event_nvmf.a 00:03:18.558 SYMLINK libspdk_event_scsi.so 00:03:18.558 SYMLINK libspdk_event_nbd.so 00:03:18.558 SO libspdk_event_nvmf.so.6.0 00:03:18.558 SYMLINK libspdk_event_ublk.so 00:03:18.817 SYMLINK libspdk_event_nvmf.so 00:03:18.817 CC module/event/subsystems/iscsi/iscsi.o 00:03:18.817 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:19.077 LIB libspdk_event_vhost_scsi.a 00:03:19.077 LIB libspdk_event_iscsi.a 00:03:19.077 SO libspdk_event_vhost_scsi.so.3.0 00:03:19.077 SO libspdk_event_iscsi.so.6.0 00:03:19.077 SYMLINK libspdk_event_vhost_scsi.so 00:03:19.077 SYMLINK libspdk_event_iscsi.so 00:03:19.336 SO libspdk.so.6.0 00:03:19.336 SYMLINK libspdk.so 00:03:19.594 CC app/trace_record/trace_record.o 00:03:19.594 CXX app/trace/trace.o 00:03:19.594 CC app/spdk_nvme_perf/perf.o 00:03:19.594 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.594 CC app/spdk_top/spdk_top.o 00:03:19.594 CC test/rpc_client/rpc_client_test.o 00:03:19.594 TEST_HEADER include/spdk/accel.h 00:03:19.594 TEST_HEADER include/spdk/accel_module.h 00:03:19.594 TEST_HEADER include/spdk/assert.h 00:03:19.594 CC app/spdk_nvme_identify/identify.o 00:03:19.594 TEST_HEADER include/spdk/barrier.h 00:03:19.594 TEST_HEADER include/spdk/base64.h 00:03:19.594 TEST_HEADER include/spdk/bdev.h 00:03:19.594 TEST_HEADER include/spdk/bdev_module.h 00:03:19.594 TEST_HEADER include/spdk/bdev_zone.h 00:03:19.594 TEST_HEADER include/spdk/bit_array.h 00:03:19.594 CC app/spdk_lspci/spdk_lspci.o 00:03:19.594 TEST_HEADER include/spdk/bit_pool.h 00:03:19.594 TEST_HEADER include/spdk/blob_bdev.h 00:03:19.594 TEST_HEADER include/spdk/blobfs.h 00:03:19.594 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:19.594 TEST_HEADER include/spdk/blob.h 00:03:19.594 TEST_HEADER include/spdk/conf.h 00:03:19.594 TEST_HEADER include/spdk/config.h 00:03:19.594 TEST_HEADER include/spdk/cpuset.h 00:03:19.594 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:19.594 TEST_HEADER include/spdk/crc16.h 00:03:19.594 TEST_HEADER include/spdk/crc64.h 00:03:19.594 TEST_HEADER include/spdk/crc32.h 00:03:19.594 TEST_HEADER include/spdk/dma.h 00:03:19.594 TEST_HEADER include/spdk/endian.h 00:03:19.594 TEST_HEADER include/spdk/dif.h 00:03:19.594 TEST_HEADER include/spdk/env.h 00:03:19.594 TEST_HEADER include/spdk/env_dpdk.h 00:03:19.594 TEST_HEADER include/spdk/fd_group.h 00:03:19.594 TEST_HEADER include/spdk/event.h 00:03:19.594 TEST_HEADER include/spdk/fd.h 00:03:19.594 TEST_HEADER include/spdk/ftl.h 00:03:19.594 TEST_HEADER include/spdk/file.h 00:03:19.594 TEST_HEADER include/spdk/histogram_data.h 00:03:19.594 TEST_HEADER include/spdk/hexlify.h 00:03:19.594 TEST_HEADER include/spdk/idxd.h 00:03:19.594 TEST_HEADER include/spdk/gpt_spec.h 00:03:19.594 TEST_HEADER include/spdk/ioat.h 00:03:19.594 TEST_HEADER include/spdk/init.h 00:03:19.594 TEST_HEADER include/spdk/ioat_spec.h 00:03:19.594 TEST_HEADER include/spdk/idxd_spec.h 00:03:19.594 TEST_HEADER include/spdk/iscsi_spec.h 00:03:19.594 TEST_HEADER include/spdk/json.h 00:03:19.594 CC app/nvmf_tgt/nvmf_main.o 00:03:19.594 TEST_HEADER include/spdk/keyring.h 00:03:19.594 TEST_HEADER include/spdk/jsonrpc.h 00:03:19.594 TEST_HEADER include/spdk/keyring_module.h 00:03:19.594 TEST_HEADER include/spdk/likely.h 00:03:19.594 TEST_HEADER include/spdk/log.h 00:03:19.594 TEST_HEADER include/spdk/lvol.h 00:03:19.594 
TEST_HEADER include/spdk/mmio.h 00:03:19.594 TEST_HEADER include/spdk/nbd.h 00:03:19.594 CC app/iscsi_tgt/iscsi_tgt.o 00:03:19.594 TEST_HEADER include/spdk/memory.h 00:03:19.853 TEST_HEADER include/spdk/notify.h 00:03:19.853 TEST_HEADER include/spdk/nvme.h 00:03:19.853 TEST_HEADER include/spdk/nvme_intel.h 00:03:19.853 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:19.853 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:19.853 CC app/spdk_dd/spdk_dd.o 00:03:19.853 TEST_HEADER include/spdk/nvme_spec.h 00:03:19.853 TEST_HEADER include/spdk/nvme_zns.h 00:03:19.853 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:19.853 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:19.853 TEST_HEADER include/spdk/nvmf_spec.h 00:03:19.853 TEST_HEADER include/spdk/nvmf.h 00:03:19.853 TEST_HEADER include/spdk/nvmf_transport.h 00:03:19.853 TEST_HEADER include/spdk/opal.h 00:03:19.853 TEST_HEADER include/spdk/pci_ids.h 00:03:19.853 TEST_HEADER include/spdk/opal_spec.h 00:03:19.853 TEST_HEADER include/spdk/pipe.h 00:03:19.853 TEST_HEADER include/spdk/queue.h 00:03:19.853 TEST_HEADER include/spdk/reduce.h 00:03:19.853 TEST_HEADER include/spdk/rpc.h 00:03:19.853 TEST_HEADER include/spdk/scsi.h 00:03:19.853 TEST_HEADER include/spdk/scheduler.h 00:03:19.853 TEST_HEADER include/spdk/scsi_spec.h 00:03:19.853 TEST_HEADER include/spdk/sock.h 00:03:19.853 TEST_HEADER include/spdk/stdinc.h 00:03:19.853 TEST_HEADER include/spdk/string.h 00:03:19.853 TEST_HEADER include/spdk/thread.h 00:03:19.853 TEST_HEADER include/spdk/trace_parser.h 00:03:19.853 TEST_HEADER include/spdk/trace.h 00:03:19.853 TEST_HEADER include/spdk/tree.h 00:03:19.853 TEST_HEADER include/spdk/util.h 00:03:19.853 TEST_HEADER include/spdk/ublk.h 00:03:19.853 TEST_HEADER include/spdk/version.h 00:03:19.853 TEST_HEADER include/spdk/uuid.h 00:03:19.854 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:19.854 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:19.854 TEST_HEADER include/spdk/vhost.h 00:03:19.854 TEST_HEADER include/spdk/vmd.h 00:03:19.854 TEST_HEADER include/spdk/xor.h 00:03:19.854 TEST_HEADER include/spdk/zipf.h 00:03:19.854 CXX test/cpp_headers/accel_module.o 00:03:19.854 CXX test/cpp_headers/accel.o 00:03:19.854 CXX test/cpp_headers/assert.o 00:03:19.854 CC app/spdk_tgt/spdk_tgt.o 00:03:19.854 CXX test/cpp_headers/barrier.o 00:03:19.854 CXX test/cpp_headers/base64.o 00:03:19.854 CXX test/cpp_headers/bdev.o 00:03:19.854 CXX test/cpp_headers/bdev_zone.o 00:03:19.854 CXX test/cpp_headers/bdev_module.o 00:03:19.854 CXX test/cpp_headers/bit_pool.o 00:03:19.854 CXX test/cpp_headers/bit_array.o 00:03:19.854 CXX test/cpp_headers/blobfs_bdev.o 00:03:19.854 CXX test/cpp_headers/blob.o 00:03:19.854 CXX test/cpp_headers/blobfs.o 00:03:19.854 CXX test/cpp_headers/blob_bdev.o 00:03:19.854 CXX test/cpp_headers/conf.o 00:03:19.854 CXX test/cpp_headers/config.o 00:03:19.854 CXX test/cpp_headers/crc32.o 00:03:19.854 CXX test/cpp_headers/crc64.o 00:03:19.854 CXX test/cpp_headers/cpuset.o 00:03:19.854 CXX test/cpp_headers/crc16.o 00:03:19.854 CXX test/cpp_headers/dif.o 00:03:19.854 CXX test/cpp_headers/endian.o 00:03:19.854 CXX test/cpp_headers/dma.o 00:03:19.854 CXX test/cpp_headers/env.o 00:03:19.854 CXX test/cpp_headers/event.o 00:03:19.854 CXX test/cpp_headers/env_dpdk.o 00:03:19.854 CXX test/cpp_headers/fd.o 00:03:19.854 CXX test/cpp_headers/file.o 00:03:19.854 CXX test/cpp_headers/gpt_spec.o 00:03:19.854 CXX test/cpp_headers/fd_group.o 00:03:19.854 CXX test/cpp_headers/ftl.o 00:03:19.854 CXX test/cpp_headers/idxd.o 00:03:19.854 CXX test/cpp_headers/hexlify.o 00:03:19.854 CXX 
test/cpp_headers/histogram_data.o 00:03:19.854 CXX test/cpp_headers/idxd_spec.o 00:03:19.854 CXX test/cpp_headers/init.o 00:03:19.854 CXX test/cpp_headers/ioat.o 00:03:19.854 CXX test/cpp_headers/ioat_spec.o 00:03:19.854 CXX test/cpp_headers/json.o 00:03:19.854 CXX test/cpp_headers/iscsi_spec.o 00:03:19.854 CXX test/cpp_headers/jsonrpc.o 00:03:19.854 CXX test/cpp_headers/keyring_module.o 00:03:19.854 CXX test/cpp_headers/keyring.o 00:03:19.854 CXX test/cpp_headers/log.o 00:03:19.854 CXX test/cpp_headers/likely.o 00:03:19.854 CXX test/cpp_headers/memory.o 00:03:19.854 CXX test/cpp_headers/mmio.o 00:03:19.854 CXX test/cpp_headers/lvol.o 00:03:19.854 CXX test/cpp_headers/nbd.o 00:03:19.854 CXX test/cpp_headers/notify.o 00:03:19.854 CXX test/cpp_headers/nvme.o 00:03:19.854 CXX test/cpp_headers/nvme_intel.o 00:03:19.854 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:19.854 CXX test/cpp_headers/nvme_ocssd.o 00:03:19.854 CXX test/cpp_headers/nvme_spec.o 00:03:19.854 CXX test/cpp_headers/nvme_zns.o 00:03:19.854 CXX test/cpp_headers/nvmf_cmd.o 00:03:19.854 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:19.854 CXX test/cpp_headers/nvmf_spec.o 00:03:19.854 CXX test/cpp_headers/nvmf.o 00:03:19.854 CXX test/cpp_headers/nvmf_transport.o 00:03:19.854 CXX test/cpp_headers/pci_ids.o 00:03:19.854 CXX test/cpp_headers/opal.o 00:03:19.854 CXX test/cpp_headers/opal_spec.o 00:03:19.854 CXX test/cpp_headers/pipe.o 00:03:19.854 CXX test/cpp_headers/queue.o 00:03:19.854 CC test/app/stub/stub.o 00:03:19.854 CC examples/util/zipf/zipf.o 00:03:19.854 CC test/app/histogram_perf/histogram_perf.o 00:03:19.854 CXX test/cpp_headers/reduce.o 00:03:19.854 CC test/env/vtophys/vtophys.o 00:03:19.854 CC examples/ioat/verify/verify.o 00:03:19.854 CC test/app/jsoncat/jsoncat.o 00:03:19.854 CC test/dma/test_dma/test_dma.o 00:03:19.854 CC test/thread/poller_perf/poller_perf.o 00:03:19.854 CC app/fio/nvme/fio_plugin.o 00:03:19.854 CC test/env/memory/memory_ut.o 00:03:19.854 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:19.854 CC test/env/pci/pci_ut.o 00:03:19.854 CC examples/ioat/perf/perf.o 00:03:19.854 CC app/fio/bdev/fio_plugin.o 00:03:20.129 LINK spdk_lspci 00:03:20.129 CC test/app/bdev_svc/bdev_svc.o 00:03:20.129 LINK interrupt_tgt 00:03:20.129 LINK nvmf_tgt 00:03:20.129 LINK rpc_client_test 00:03:20.395 LINK spdk_nvme_discover 00:03:20.395 LINK spdk_trace_record 00:03:20.395 CC test/env/mem_callbacks/mem_callbacks.o 00:03:20.395 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:20.395 LINK jsoncat 00:03:20.395 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:20.395 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:20.395 LINK stub 00:03:20.395 CXX test/cpp_headers/rpc.o 00:03:20.395 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:20.395 CXX test/cpp_headers/scheduler.o 00:03:20.395 CXX test/cpp_headers/scsi.o 00:03:20.395 CXX test/cpp_headers/scsi_spec.o 00:03:20.395 CXX test/cpp_headers/sock.o 00:03:20.395 CXX test/cpp_headers/stdinc.o 00:03:20.395 CXX test/cpp_headers/string.o 00:03:20.395 CXX test/cpp_headers/thread.o 00:03:20.395 CXX test/cpp_headers/trace.o 00:03:20.395 CXX test/cpp_headers/tree.o 00:03:20.395 CXX test/cpp_headers/trace_parser.o 00:03:20.395 CXX test/cpp_headers/ublk.o 00:03:20.395 CXX test/cpp_headers/util.o 00:03:20.395 CXX test/cpp_headers/uuid.o 00:03:20.395 CXX test/cpp_headers/version.o 00:03:20.395 CXX test/cpp_headers/vfio_user_spec.o 00:03:20.395 CXX test/cpp_headers/vfio_user_pci.o 00:03:20.395 LINK spdk_tgt 00:03:20.395 CXX test/cpp_headers/vhost.o 00:03:20.395 CXX test/cpp_headers/vmd.o 
00:03:20.395 CXX test/cpp_headers/xor.o 00:03:20.395 CXX test/cpp_headers/zipf.o 00:03:20.395 LINK vtophys 00:03:20.395 LINK verify 00:03:20.395 LINK iscsi_tgt 00:03:20.395 LINK histogram_perf 00:03:20.395 LINK zipf 00:03:20.395 LINK poller_perf 00:03:20.395 LINK bdev_svc 00:03:20.395 LINK env_dpdk_post_init 00:03:20.653 LINK spdk_trace 00:03:20.653 LINK ioat_perf 00:03:20.653 LINK test_dma 00:03:20.653 LINK spdk_dd 00:03:20.653 LINK pci_ut 00:03:20.910 LINK nvme_fuzz 00:03:20.910 LINK spdk_nvme 00:03:20.910 LINK spdk_bdev 00:03:20.910 LINK spdk_nvme_identify 00:03:20.910 CC examples/idxd/perf/perf.o 00:03:20.910 CC examples/sock/hello_world/hello_sock.o 00:03:20.910 CC examples/vmd/led/led.o 00:03:20.910 CC app/vhost/vhost.o 00:03:20.910 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.910 CC test/event/event_perf/event_perf.o 00:03:20.910 CC test/event/reactor_perf/reactor_perf.o 00:03:20.910 CC test/event/reactor/reactor.o 00:03:20.910 CC test/event/app_repeat/app_repeat.o 00:03:20.910 CC test/event/scheduler/scheduler.o 00:03:20.910 LINK vhost_fuzz 00:03:20.910 LINK spdk_nvme_perf 00:03:20.910 CC examples/thread/thread/thread_ex.o 00:03:20.910 LINK spdk_top 00:03:21.167 LINK mem_callbacks 00:03:21.167 LINK lsvmd 00:03:21.167 LINK led 00:03:21.167 CC test/nvme/startup/startup.o 00:03:21.167 CC test/nvme/err_injection/err_injection.o 00:03:21.167 CC test/nvme/fdp/fdp.o 00:03:21.167 LINK reactor_perf 00:03:21.167 CC test/nvme/sgl/sgl.o 00:03:21.167 LINK event_perf 00:03:21.167 CC test/nvme/aer/aer.o 00:03:21.167 CC test/nvme/reserve/reserve.o 00:03:21.167 CC test/nvme/connect_stress/connect_stress.o 00:03:21.167 CC test/nvme/overhead/overhead.o 00:03:21.167 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:21.167 CC test/nvme/e2edp/nvme_dp.o 00:03:21.167 CC test/nvme/compliance/nvme_compliance.o 00:03:21.167 CC test/nvme/fused_ordering/fused_ordering.o 00:03:21.167 CC test/nvme/simple_copy/simple_copy.o 00:03:21.167 CC test/nvme/reset/reset.o 00:03:21.167 CC test/nvme/boot_partition/boot_partition.o 00:03:21.167 LINK reactor 00:03:21.167 CC test/nvme/cuse/cuse.o 00:03:21.167 LINK vhost 00:03:21.167 CC test/accel/dif/dif.o 00:03:21.167 LINK app_repeat 00:03:21.167 CC test/blobfs/mkfs/mkfs.o 00:03:21.167 LINK hello_sock 00:03:21.167 LINK idxd_perf 00:03:21.167 LINK scheduler 00:03:21.167 CC test/lvol/esnap/esnap.o 00:03:21.167 LINK thread 00:03:21.167 LINK err_injection 00:03:21.167 LINK startup 00:03:21.167 LINK boot_partition 00:03:21.167 LINK memory_ut 00:03:21.167 LINK connect_stress 00:03:21.167 LINK doorbell_aers 00:03:21.424 LINK reserve 00:03:21.424 LINK fused_ordering 00:03:21.424 LINK sgl 00:03:21.424 LINK simple_copy 00:03:21.424 LINK aer 00:03:21.424 LINK overhead 00:03:21.424 LINK nvme_dp 00:03:21.424 LINK nvme_compliance 00:03:21.424 LINK mkfs 00:03:21.424 LINK reset 00:03:21.425 LINK fdp 00:03:21.425 LINK dif 00:03:21.425 CC examples/nvme/reconnect/reconnect.o 00:03:21.682 CC examples/nvme/hello_world/hello_world.o 00:03:21.682 CC examples/nvme/abort/abort.o 00:03:21.682 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:21.682 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:21.682 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:21.682 CC examples/nvme/arbitration/arbitration.o 00:03:21.682 CC examples/nvme/hotplug/hotplug.o 00:03:21.682 CC examples/accel/perf/accel_perf.o 00:03:21.682 LINK cmb_copy 00:03:21.682 CC examples/blob/cli/blobcli.o 00:03:21.682 CC examples/blob/hello_world/hello_blob.o 00:03:21.682 LINK hello_world 00:03:21.682 LINK pmr_persistence 00:03:21.940 LINK hotplug 
00:03:21.940 LINK iscsi_fuzz 00:03:21.940 LINK reconnect 00:03:21.940 LINK arbitration 00:03:21.940 LINK abort 00:03:21.940 LINK nvme_manage 00:03:21.940 LINK hello_blob 00:03:21.940 CC test/bdev/bdevio/bdevio.o 00:03:21.940 LINK accel_perf 00:03:22.198 LINK cuse 00:03:22.198 LINK blobcli 00:03:22.455 LINK bdevio 00:03:22.455 CC examples/bdev/hello_world/hello_bdev.o 00:03:22.455 CC examples/bdev/bdevperf/bdevperf.o 00:03:22.711 LINK hello_bdev 00:03:22.969 LINK bdevperf 00:03:23.534 CC examples/nvmf/nvmf/nvmf.o 00:03:23.792 LINK nvmf 00:03:24.728 LINK esnap 00:03:24.988 00:03:24.988 real 0m44.795s 00:03:24.988 user 6m31.525s 00:03:24.988 sys 3m25.748s 00:03:24.988 11:14:08 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:24.988 11:14:08 make -- common/autotest_common.sh@10 -- $ set +x 00:03:24.988 ************************************ 00:03:24.988 END TEST make 00:03:24.988 ************************************ 00:03:24.988 11:14:08 -- common/autotest_common.sh@1142 -- $ return 0 00:03:24.988 11:14:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:24.988 11:14:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:24.988 11:14:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:24.988 11:14:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.988 11:14:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:24.988 11:14:08 -- pm/common@44 -- $ pid=312996 00:03:24.988 11:14:08 -- pm/common@50 -- $ kill -TERM 312996 00:03:24.988 11:14:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.988 11:14:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:24.988 11:14:08 -- pm/common@44 -- $ pid=312998 00:03:24.988 11:14:08 -- pm/common@50 -- $ kill -TERM 312998 00:03:24.988 11:14:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.988 11:14:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:24.988 11:14:08 -- pm/common@44 -- $ pid=312999 00:03:24.988 11:14:08 -- pm/common@50 -- $ kill -TERM 312999 00:03:24.988 11:14:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.988 11:14:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:24.988 11:14:08 -- pm/common@44 -- $ pid=313022 00:03:24.988 11:14:08 -- pm/common@50 -- $ sudo -E kill -TERM 313022 00:03:24.988 11:14:08 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:24.988 11:14:08 -- nvmf/common.sh@7 -- # uname -s 00:03:24.988 11:14:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:24.988 11:14:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:24.988 11:14:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:24.988 11:14:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:24.988 11:14:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:24.988 11:14:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:24.988 11:14:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:24.988 11:14:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:24.988 11:14:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:24.988 11:14:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:24.988 11:14:08 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:24.988 11:14:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:24.988 11:14:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:24.988 11:14:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:24.988 11:14:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:24.988 11:14:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:24.988 11:14:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:24.988 11:14:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:24.988 11:14:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:24.988 11:14:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:24.989 11:14:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.989 11:14:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.989 11:14:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.989 11:14:08 -- paths/export.sh@5 -- # export PATH 00:03:24.989 11:14:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.989 11:14:08 -- nvmf/common.sh@47 -- # : 0 00:03:24.989 11:14:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:24.989 11:14:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:24.989 11:14:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:24.989 11:14:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:24.989 11:14:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.248 11:14:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:25.248 11:14:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:25.248 11:14:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:25.248 11:14:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.248 11:14:08 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.248 11:14:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.248 11:14:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.248 11:14:08 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:25.248 11:14:08 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.248 11:14:08 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:25.248 11:14:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.248 11:14:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.248 11:14:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.248 11:14:08 -- spdk/autotest.sh@48 -- # udevadm_pid=372131 00:03:25.248 11:14:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.248 11:14:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.248 11:14:08 -- pm/common@17 -- # local monitor 00:03:25.248 11:14:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.248 11:14:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.248 11:14:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.248 11:14:08 -- pm/common@21 -- # date +%s 00:03:25.248 11:14:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.248 11:14:08 -- pm/common@21 -- # date +%s 00:03:25.248 11:14:08 -- pm/common@25 -- # sleep 1 00:03:25.248 11:14:08 -- pm/common@21 -- # date +%s 00:03:25.248 11:14:08 -- pm/common@21 -- # date +%s 00:03:25.248 11:14:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721034848 00:03:25.248 11:14:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721034848 00:03:25.248 11:14:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721034848 00:03:25.248 11:14:08 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721034848 00:03:25.248 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721034848_collect-vmstat.pm.log 00:03:25.248 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721034848_collect-cpu-load.pm.log 00:03:25.248 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721034848_collect-cpu-temp.pm.log 00:03:25.248 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721034848_collect-bmc-pm.bmc.pm.log 00:03:26.185 11:14:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.185 11:14:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.185 11:14:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:26.185 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:03:26.185 11:14:09 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.185 11:14:09 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:26.185 11:14:09 -- common/autotest_common.sh@10 -- # set +x 00:03:26.186 11:14:09 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:26.186 11:14:09 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:26.186 11:14:09 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
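The core-dump hook configured above relies on the kernel's core_pattern pipe mechanism: a pattern starting with '|' makes Linux pipe each crash dump into a handler program, expanding the template fields (%P pid, %s signal, %t time) as its arguments. A minimal sketch of that setup, with illustrative paths rather than the exact autotest.sh logic:

  #!/usr/bin/env bash
  # Sketch: route kernel core dumps through a collector script (run as root; paths are assumed).
  set -euo pipefail

  out_dir=/tmp/coredumps                      # assumed output directory
  collector=/usr/local/bin/core-collector.sh  # assumed handler; receives PID, signal, time

  mkdir -p "$out_dir"
  old_pattern=$(cat /proc/sys/kernel/core_pattern)             # save the current setting
  echo "|$collector %P %s %t" > /proc/sys/kernel/core_pattern  # '|' = pipe the dump to the handler

  # ... run the workload that might crash ...

  echo "$old_pattern" > /proc/sys/kernel/core_pattern          # restore on teardown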
00:03:26.186 11:14:09 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:26.186 11:14:09 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:26.186 11:14:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:26.186 11:14:09 -- common/autotest_common.sh@1455 -- # uname 00:03:26.186 11:14:09 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:26.186 11:14:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:26.186 11:14:09 -- common/autotest_common.sh@1475 -- # uname 00:03:26.186 11:14:09 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:26.186 11:14:09 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:26.186 11:14:09 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:26.186 11:14:09 -- spdk/autotest.sh@72 -- # hash lcov 00:03:26.186 11:14:09 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:26.186 11:14:09 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:26.186 --rc lcov_branch_coverage=1 00:03:26.186 --rc lcov_function_coverage=1 00:03:26.186 --rc genhtml_branch_coverage=1 00:03:26.186 --rc genhtml_function_coverage=1 00:03:26.186 --rc genhtml_legend=1 00:03:26.186 --rc geninfo_all_blocks=1 00:03:26.186 ' 00:03:26.186 11:14:09 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:26.186 --rc lcov_branch_coverage=1 00:03:26.186 --rc lcov_function_coverage=1 00:03:26.186 --rc genhtml_branch_coverage=1 00:03:26.186 --rc genhtml_function_coverage=1 00:03:26.186 --rc genhtml_legend=1 00:03:26.186 --rc geninfo_all_blocks=1 00:03:26.186 ' 00:03:26.186 11:14:09 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:26.186 --rc lcov_branch_coverage=1 00:03:26.186 --rc lcov_function_coverage=1 00:03:26.186 --rc genhtml_branch_coverage=1 00:03:26.186 --rc genhtml_function_coverage=1 00:03:26.186 --rc genhtml_legend=1 00:03:26.186 --rc geninfo_all_blocks=1 00:03:26.186 --no-external' 00:03:26.186 11:14:09 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:26.186 --rc lcov_branch_coverage=1 00:03:26.186 --rc lcov_function_coverage=1 00:03:26.186 --rc genhtml_branch_coverage=1 00:03:26.186 --rc genhtml_function_coverage=1 00:03:26.186 --rc genhtml_legend=1 00:03:26.186 --rc geninfo_all_blocks=1 00:03:26.186 --no-external' 00:03:26.186 11:14:09 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:26.186 lcov: LCOV version 1.14 00:03:26.186 11:14:09 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:38.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:38.413 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:48.399 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:48.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:48.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 
00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:48.400 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:48.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:48.400 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:48.401 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:48.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:48.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:50.305 11:14:33 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:50.305 11:14:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:50.305 11:14:33 -- common/autotest_common.sh@10 -- # set +x 00:03:50.305 11:14:33 -- spdk/autotest.sh@91 -- # rm -f 00:03:50.305 11:14:33 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.595 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:53.595 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:53.595 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:53.595 11:14:36 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:53.595 11:14:36 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:53.595 11:14:36 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:53.595 11:14:36 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:53.595 11:14:36 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.595 11:14:36 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:53.595 11:14:36 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:53.595 11:14:36 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.595 11:14:36 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.595 11:14:36 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:53.595 
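The long run of "no functions found" warnings above comes from the baseline lcov capture (-c -i -t Baseline) shown earlier: at that point no tests have executed, so units such as the header-only cpp_headers stubs carry no function data yet, which geninfo reports but tolerates. A rough sketch of the usual baseline-plus-test lcov flow, with illustrative paths (not the exact autotest sequence):

  #!/usr/bin/env bash
  # Sketch of a typical lcov capture flow for a tree built with --coverage (illustrative paths).
  set -euo pipefail
  src=/path/to/build    # assumed instrumented source/build tree
  out=/path/to/output

  # 1. Baseline: record zero counts for every instrumented file (-i = initial capture).
  lcov -c -i -d "$src" -t Baseline -o "$out/cov_base.info"

  # 2. Run the test suite; execution writes .gcda counter files next to the .gcno files.
  #    ./run_tests.sh

  # 3. Capture the post-test counters.
  lcov -c -d "$src" -t Tests -o "$out/cov_test.info"

  # 4. Merge baseline and test data so never-executed files still appear at 0% coverage.
  lcov -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  genhtml "$out/cov_total.info" -o "$out/coverage-html"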
11:14:36 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.595 11:14:36 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:53.595 11:14:36 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:53.595 11:14:36 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:53.595 11:14:36 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:53.595 No valid GPT data, bailing 00:03:53.595 11:14:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.595 11:14:36 -- scripts/common.sh@391 -- # pt= 00:03:53.595 11:14:36 -- scripts/common.sh@392 -- # return 1 00:03:53.595 11:14:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:53.595 1+0 records in 00:03:53.595 1+0 records out 00:03:53.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00654053 s, 160 MB/s 00:03:53.595 11:14:36 -- spdk/autotest.sh@118 -- # sync 00:03:53.595 11:14:36 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:53.595 11:14:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:53.595 11:14:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:58.867 11:14:42 -- spdk/autotest.sh@124 -- # uname -s 00:03:58.867 11:14:42 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:58.867 11:14:42 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:58.867 11:14:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.867 11:14:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.867 11:14:42 -- common/autotest_common.sh@10 -- # set +x 00:03:58.867 ************************************ 00:03:58.867 START TEST setup.sh 00:03:58.867 ************************************ 00:03:58.867 11:14:42 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:58.867 * Looking for test storage... 00:03:58.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:58.867 11:14:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:58.867 11:14:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:58.867 11:14:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:58.867 11:14:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.867 11:14:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.867 11:14:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.867 ************************************ 00:03:58.867 START TEST acl 00:03:58.867 ************************************ 00:03:58.867 11:14:42 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:58.867 * Looking for test storage... 
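The wipe above only runs because the partition-table probe came back empty (spdk-gpt.py bailed and blkid printed no PTTYPE), after which the first MiB of the device is zeroed. A conservative sketch of that guard, using an assumed device path:

  #!/usr/bin/env bash
  # Sketch: zero a device's first MiB only if it carries no partition table (illustrative device).
  set -euo pipefail
  dev=/dev/nvme0n1   # assumed target block device

  # blkid prints the partition-table type (gpt, dos, ...) or nothing when none is present.
  pt=$(blkid -s PTTYPE -o value "$dev" || true)

  if [[ -z "$pt" ]]; then
      # No partition table detected; clear any stale metadata in the first 1 MiB.
      dd if=/dev/zero of="$dev" bs=1M count=1
      sync
  else
      echo "refusing to wipe $dev: found $pt partition table" >&2
  fi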
00:03:58.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:58.867 11:14:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:58.867 11:14:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:58.867 11:14:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:58.867 11:14:42 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:58.867 11:14:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:58.867 11:14:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:58.867 11:14:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:58.867 11:14:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.867 11:14:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:58.867 11:14:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:58.867 11:14:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:58.867 11:14:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:58.867 11:14:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:58.867 11:14:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:58.867 11:14:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.867 11:14:42 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.161 11:14:45 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:02.161 11:14:45 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:02.161 11:14:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.161 11:14:45 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:02.161 11:14:45 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.161 11:14:45 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:04.698 Hugepages 00:04:04.698 node hugesize free / total 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.698 00:04:04.698 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:04.698 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.699 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:04.957 11:14:48 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:04.957 11:14:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.957 11:14:48 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.957 11:14:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:04.957 ************************************ 00:04:04.957 START TEST denied 00:04:04.957 ************************************ 00:04:04.957 11:14:48 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:04.957 11:14:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:04:04.957 11:14:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:04.957 11:14:48 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:04:04.958 11:14:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.958 11:14:48 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:08.247 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:04:08.247 11:14:51 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:04:08.247 11:14:51 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:08.247 11:14:51 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:08.247 11:14:51 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:04:08.247 11:14:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:04:08.247 11:14:51 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:08.247 11:14:51 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:08.247 11:14:51 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:08.247 11:14:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.248 11:14:51 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.436 00:04:12.436 real 0m7.123s 00:04:12.436 user 0m2.323s 00:04:12.436 sys 0m4.069s 00:04:12.436 11:14:55 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.436 11:14:55 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:12.436 ************************************ 00:04:12.436 END TEST denied 00:04:12.436 ************************************ 00:04:12.436 11:14:55 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:12.436 11:14:55 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:12.436 11:14:55 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.436 11:14:55 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.436 11:14:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:12.436 ************************************ 00:04:12.436 START TEST allowed 00:04:12.436 ************************************ 00:04:12.436 11:14:55 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:12.436 11:14:55 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:04:12.436 11:14:55 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:12.436 11:14:55 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:04:12.436 11:14:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.436 11:14:55 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:16.691 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:16.691 11:14:59 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:16.691 11:14:59 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:16.691 11:14:59 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:16.691 11:14:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.691 11:14:59 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.230 00:04:19.230 real 0m7.034s 00:04:19.230 user 0m2.234s 00:04:19.230 sys 0m3.991s 00:04:19.230 11:15:02 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.230 11:15:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:19.230 ************************************ 00:04:19.230 END TEST allowed 00:04:19.230 ************************************ 00:04:19.230 11:15:02 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:19.230 00:04:19.230 real 0m20.408s 00:04:19.230 user 0m6.943s 00:04:19.230 sys 0m12.133s 00:04:19.230 11:15:02 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.230 11:15:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:19.230 ************************************ 00:04:19.230 END TEST acl 00:04:19.230 ************************************ 00:04:19.230 11:15:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:19.230 11:15:02 setup.sh -- 
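The denied and allowed runs above exercise setup.sh through its PCI filter environment variables: a BDF listed in PCI_BLOCKED is skipped (it keeps its kernel nvme driver), while restricting PCI_ALLOWED to that BDF lets setup.sh rebind it for userspace use (the log shows "nvme -> vfio-pci"). Roughly, with an illustrative relative path to the script:

  #!/usr/bin/env bash
  # Sketch of the PCI allow/block filters exercised by the acl denied/allowed tests (run as root).
  set -euo pipefail
  bdf=0000:5e:00.0   # the NVMe controller seen in this log

  # Denied: a blocked device is left on its kernel driver and reported as skipped.
  PCI_BLOCKED=" $bdf" ./scripts/setup.sh config | grep "Skipping denied controller at $bdf"

  # Allowed: with only this device allowed, setup.sh rebinds it (e.g. nvme -> vfio-pci).
  PCI_ALLOWED="$bdf" ./scripts/setup.sh config

  # Reset returns devices to their kernel drivers between runs.
  ./scripts/setup.sh reset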
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:19.230 11:15:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.230 11:15:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.230 11:15:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:19.230 ************************************ 00:04:19.230 START TEST hugepages 00:04:19.230 ************************************ 00:04:19.230 11:15:02 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:19.230 * Looking for test storage... 00:04:19.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.230 11:15:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.231 11:15:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173314772 kB' 'MemAvailable: 176182948 kB' 'Buffers: 3896 kB' 'Cached: 10185656 kB' 'SwapCached: 0 kB' 'Active: 7205108 kB' 'Inactive: 3507356 kB' 'Active(anon): 6813100 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527256 kB' 'Mapped: 241108 kB' 'Shmem: 6290188 kB' 'KReclaimable: 226528 kB' 'Slab: 787828 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 561300 kB' 'KernelStack: 20512 kB' 'PageTables: 9464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8331020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315428 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:19.231 11:15:02 setup.sh.hugepages -- setup/common.sh@31-32 -- # [get_meminfo walks the /proc/meminfo dump above with IFS=': ', hitting 'continue' on every key from MemTotal through HugePages_Free because none of them is the requested Hugepagesize] 00:04:19.232 11:15:02 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.232 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.492 
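The loop traced above is setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key until it reaches Hugepagesize, echoing 2048 and returning; hugepages.sh records that as default_hugepages, and clear_hp is zeroing every per-node hugepage pool before the test starts. A minimal stand-alone sketch of those two steps, assuming sufficient privileges to write the sysfs files (illustrative shell, not the project's own code):

    # Sketch of the /proc/meminfo lookup traced above (simplified).
    get_meminfo_value() {                      # usage: get_meminfo_value Hugepagesize
        local key=$1 var val _
        while IFS=': ' read -r var val _; do   # same IFS=': ' splitting as in the trace
            if [[ $var == "$key" ]]; then
                echo "$val"                    # -> 2048 for Hugepagesize on this machine
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo_value Hugepagesize)   # 2048 (kB)

    # Sketch of clear_hp: reset any pre-existing per-node pools
    # (assumption: the trace's bare 'echo 0' targets each pool's nr_hugepages file).
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
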
11:15:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:19.492 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:19.492 11:15:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.492 11:15:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.492 11:15:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.492 ************************************ 00:04:19.492 START TEST default_setup 00:04:19.492 ************************************ 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.492 11:15:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.031 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:22.031 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 
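At this point default_setup has asked get_test_nr_hugepages for 2097152 kB (2 GiB) on node 0: with the 2048 kB default hugepage size that works out to nr_hugepages=1024, stored for node 0 before 'setup output' hands control to scripts/setup.sh, whose driver rebinds to vfio-pci are what follow. The arithmetic, restated as a sketch (variable names follow the trace; interpreting the 2097152 argument as kB is inferred from the 1024-page result):

    # How the requested size becomes a hugepage count (sketch).
    size=2097152                                   # requested size in kB (2 GiB)
    default_hugepages=2048                         # Hugepagesize from /proc/meminfo, in kB
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024 pages of 2 MiB
    nodes_test[0]=$nr_hugepages                    # the whole pool is requested on node 0
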
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:22.290 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:23.233 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175457940 kB' 'MemAvailable: 178326116 kB' 'Buffers: 3896 kB' 'Cached: 10185756 kB' 'SwapCached: 0 kB' 'Active: 7217920 kB' 'Inactive: 3507356 kB' 'Active(anon): 6825912 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538936 kB' 'Mapped: 241064 kB' 'Shmem: 6290288 kB' 'KReclaimable: 226528 kB' 'Slab: 786588 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 560060 
kB' 'KernelStack: 21008 kB' 'PageTables: 10704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8344504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315760 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:23.233 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [get_meminfo scans this fresh /proc/meminfo copy the same way, skipping every key from MemTotal through WritebackTmp because none matches AnonHugePages] 00:04:23.234 11:15:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- 
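verify_nr_hugepages first checks that transparent hugepages are not forced on, then re-reads /proc/meminfo: AnonHugePages is 0 kB in this run, so anon=0, and the same lookup is now being repeated for HugePages_Surp (and HugePages_Rsvd after it). Reusing the sketch helper from above, the counters this verification pass collects look like this (illustrative; the values are the ones reported in the dumps in this log):

    # Counters gathered by the verification pass, with this run's values.
    anon=$(get_meminfo_value AnonHugePages)     # 0 kB -- no THP-backed anonymous memory
    surp=$(get_meminfo_value HugePages_Surp)    # 0    -- no surplus pages allocated
    resv=$(get_meminfo_value HugePages_Rsvd)    # 0    -- no reserved-but-unfaulted pages
    total=$(get_meminfo_value HugePages_Total)  # 1024
    free=$(get_meminfo_value HugePages_Free)    # 1024
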
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.234 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.235 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175461456 kB' 'MemAvailable: 178329632 kB' 'Buffers: 3896 kB' 'Cached: 10185760 kB' 'SwapCached: 0 kB' 'Active: 7217596 kB' 'Inactive: 3507356 kB' 'Active(anon): 6825588 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538688 kB' 'Mapped: 241132 kB' 'Shmem: 6290292 kB' 'KReclaimable: 226528 kB' 'Slab: 786464 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 559936 kB' 'KernelStack: 20832 kB' 'PageTables: 9880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8344524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315648 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:23.235 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.235 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.235 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.235 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.235 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.235 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.235 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.235 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.235 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.235 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.235 11:15:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@31-32 -- # [the same key-by-key scan repeats for HugePages_Surp: every entry from Buffers through CmaTotal is skipped with 'continue'] 00:04:23.236 11:15:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- 
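The scan above is the get_meminfo helper in setup/common.sh walking /proc/meminfo one field at a time: common.sh@31 splits each line on ': ', @32 skips every key that is not the requested one, and @33 echoes the value of the first match, so HugePages_Surp came back as 0 and hugepages.sh@99 recorded surp=0. The following is a sketch of that loop reconstructed from this xtrace alone; the real SPDK helper may differ in detail.

    #!/usr/bin/env bash
    # Sketch of get_meminfo as implied by the setup/common.sh trace; illustration only.
    shopt -s extglob

    get_meminfo() {
        local get=$1        # field to look up, e.g. HugePages_Surp
        local node=${2:-}   # optional NUMA node number
        local var val _
        local mem_f mem line
        mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo instead (common.sh@23-24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long runs of 'continue' in the log
            echo "$val"
            return 0
        done
    }

    get_meminfo HugePages_Surp      # system-wide; printed 0 in this run
    get_meminfo HugePages_Surp 0    # NUMA node 0; also 0 in this run

The same helper is what hugepages.sh@100 calls next for HugePages_Rsvd.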
00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:23.236 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175461244 kB' 'MemAvailable: 178329420 kB' 'Buffers: 3896 kB' 'Cached: 10185776 kB' 'SwapCached: 0 kB' 'Active: 7217636 kB' 'Inactive: 3507356 kB' 'Active(anon): 6825628 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538504 kB' 'Mapped: 241136 kB' 'Shmem: 6290308 kB' 'KReclaimable: 226528 kB' 'Slab: 786480 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 559952 kB' 'KernelStack: 20720 kB' 'PageTables: 9816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8344544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315696 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB'
[xtrace condensed: setup/common.sh@31-32 skip each non-matching /proc/meminfo field (MemTotal through HugePages_Free) with 'continue']
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:23.238 nr_hugepages=1024 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:23.238 resv_hugepages=0 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:23.238 surplus_hugepages=0 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:23.238 anon_hugepages=0 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
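With surp=0, resv=0 and the pool reporting nr_hugepages=1024, hugepages.sh@107-109 only assert that the configured 2048 kB page pool is fully accounted for before the per-node split. A standalone re-check of the same arithmetic could look like the sketch below; the awk one-liners are an illustration (the test itself uses its get_meminfo helper), and EXPECTED simply mirrors this run's 1024 pages.

    #!/usr/bin/env bash
    # Re-derive the hugepages accounting asserted around hugepages.sh@107-109; illustration only.
    EXPECTED=1024
    nr=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp"
    # Same shape as the trace's check: the expected pool size must equal the
    # reported total plus any surplus and reserved pages before the per-node split.
    (( EXPECTED == nr + surp + resv )) && echo "hugepages pool consistent" || echo "hugepages accounting mismatch" >&2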
00:04:23.238 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175474412 kB' 'MemAvailable: 178342588 kB' 'Buffers: 3896 kB' 'Cached: 10185800 kB' 'SwapCached: 0 kB' 'Active: 7216996 kB' 'Inactive: 3507356 kB' 'Active(anon): 6824988 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538048 kB' 'Mapped: 241056 kB' 'Shmem: 6290332 kB' 'KReclaimable: 226528 kB' 'Slab: 786352 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 559824 kB' 'KernelStack: 20592 kB' 'PageTables: 9684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8344568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB'
[xtrace condensed: setup/common.sh@31-32 skip each non-matching /proc/meminfo field (MemTotal through Unaccepted) with 'continue']
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:23.239 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:23.500 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:23.500 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:23.500 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:23.500 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:23.500 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.500 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:23.500 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:23.500 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.500 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.500 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:23.500 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:23.500 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85475520 kB' 'MemUsed: 12187164 kB' 'SwapCached: 0 kB' 'Active: 5186824 kB' 'Inactive: 3337652 kB' 'Active(anon): 5029284 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8371848 kB' 'Mapped: 88128 kB' 'AnonPages: 155820 kB' 'Shmem: 4876656 kB' 'KernelStack: 11960 kB' 'PageTables: 4712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128176 kB' 'Slab: 404776 kB' 'SReclaimable: 128176 kB' 'SUnreclaim: 276600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
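The node-0 snapshot above comes from /sys/devices/system/node/node0/meminfo, which get_meminfo switches to when a node argument is given (common.sh@23-24). Around it, get_nodes (hugepages.sh@112) discovered two NUMA nodes and seeded the expected split, 1024 pages on node 0 and 0 on node 1, before hugepages.sh@115-117 walk each node and read its HugePages_Surp. A rough standalone sketch of that bookkeeping follows; it collapses the script's separate nodes_sys/nodes_test arrays into one and is illustrative only:

    #!/usr/bin/env bash
    # Sketch of the per-node hugepages bookkeeping implied by hugepages.sh@112-117.
    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=0      # start every node at 0 expected pages
    done
    nodes_sys[0]=1024                    # this run placed the whole 1024-page pool on node 0
    echo "no_nodes=${#nodes_sys[@]}"     # 2 on this WFP8 host
    for node in "${!nodes_sys[@]}"; do
        # Per-node surplus, read from the node's own meminfo as get_meminfo HugePages_Surp $node does.
        surp=$(awk '/HugePages_Surp:/ {print $NF}' "/sys/devices/system/node/node$node/meminfo")
        echo "node$node: expected=${nodes_sys[$node]} HugePages_Surp=$surp"
    done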
[xtrace condensed: setup/common.sh@31-32 step through node0's meminfo fields (MemTotal through KReclaimable so far), skipping each non-matching key with 'continue' while looking for HugePages_Surp]
00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 --
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.501 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
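Right after this scan the trace closes out default_setup (node0=1024 expecting 1024) and starts per_node_1G_alloc, which requests 1048576 kB of hugepages per node; at the default 2048 kB hugepage size that works out to 512 pages (1 GiB) on each of nodes 0 and 1, matching the NRHUGE=512 and HUGENODE=0,1 settings passed to scripts/setup.sh below. A rough sketch of the underlying per-node reservation via the standard kernel sysfs knob - not the actual SPDK setup.sh logic - assuming 2048 kB default hugepages:

NRHUGE=512
for node in 0 1; do
    # request 512 x 2048 kB hugepages (1048576 kB) on this NUMA node
    echo "$NRHUGE" > /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
done
for node in 0 1; do
    # read back how many pages the kernel actually managed to reserve
    cat /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
done

The verify_nr_hugepages pass that follows then re-reads AnonHugePages, HugePages_Surp and HugePages_Rsvd through the same field-by-field meminfo scan to confirm the expected counts.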
00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.502 node0=1024 expecting 1024 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.502 00:04:23.502 real 0m3.984s 00:04:23.502 user 0m1.313s 00:04:23.502 sys 0m1.974s 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.502 11:15:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:23.502 ************************************ 00:04:23.502 END TEST default_setup 00:04:23.502 ************************************ 00:04:23.502 11:15:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:23.502 11:15:06 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:23.502 11:15:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.502 11:15:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.502 11:15:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.502 ************************************ 00:04:23.502 START TEST per_node_1G_alloc 00:04:23.502 ************************************ 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.502 11:15:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:26.038 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.039 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:26.039 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.299 
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.299 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.299 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175492896 kB' 'MemAvailable: 178361072 kB' 'Buffers: 3896 kB' 'Cached: 10185896 kB' 'SwapCached: 0 kB' 'Active: 7213716 kB' 'Inactive: 3507356 kB' 'Active(anon): 6821708 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534248 kB' 'Mapped: 240596 kB' 'Shmem: 6290428 kB' 'KReclaimable: 226528 kB' 'Slab: 786480 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 559952 kB' 'KernelStack: 20624 kB' 'PageTables: 9612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8337972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.300 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175495276 kB' 'MemAvailable: 178363452 kB' 'Buffers: 3896 kB' 'Cached: 10185900 kB' 'SwapCached: 0 kB' 'Active: 7213304 kB' 'Inactive: 3507356 kB' 'Active(anon): 6821296 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534408 kB' 'Mapped: 240128 kB' 'Shmem: 6290432 kB' 'KReclaimable: 226528 kB' 'Slab: 786436 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 559908 kB' 'KernelStack: 20736 kB' 'PageTables: 9592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.301 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 
11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.302 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175495920 kB' 'MemAvailable: 178364096 kB' 'Buffers: 3896 kB' 'Cached: 10185916 kB' 'SwapCached: 0 kB' 'Active: 7213692 kB' 'Inactive: 3507356 kB' 'Active(anon): 6821684 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534752 kB' 'Mapped: 240128 kB' 'Shmem: 6290448 kB' 'KReclaimable: 226528 kB' 'Slab: 786444 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 559916 kB' 'KernelStack: 20624 kB' 'PageTables: 9704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8340780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.303 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.303 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.304 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.305 11:15:09 
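[editor's note] The trace above is the get_meminfo helper from setup/common.sh scanning /proc/meminfo key by key (first for HugePages_Surp, then for HugePages_Rsvd) and echoing the matching value. A rough reconstruction of that helper, pieced together only from the common.sh@17-@33 lines visible in this trace, is sketched below; the argument handling, the while-loop framing, and the shopt line are assumptions, everything else mirrors the traced commands.

    shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

    get_meminfo() {
        local get=$1      # meminfo key to look up, e.g. HugePages_Rsvd
        local node=$2     # optional NUMA node id (empty for the global pool)
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # if a node id was given and its meminfo exists, read that file instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # per-node meminfo prefixes every line with "Node N "; drop that prefix
        mem=("${mem[@]#Node +([0-9]) }")

        # scan "key: value [kB]" lines until the requested key is found
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

With the meminfo dump shown earlier in this trace, get_meminfo HugePages_Surp and get_meminfo HugePages_Rsvd both echo 0, which is exactly what the surrounding hugepages.sh lines capture as surp=0 and resv=0.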
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.305 nr_hugepages=1024 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.305 resv_hugepages=0 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.305 surplus_hugepages=0 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.305 anon_hugepages=0 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.305 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.567 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.567 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.567 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.567 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.567 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.567 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.567 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.567 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.567 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.567 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.567 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.567 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175495496 kB' 'MemAvailable: 178363672 kB' 'Buffers: 3896 kB' 'Cached: 10185916 kB' 'SwapCached: 0 kB' 'Active: 7213476 kB' 'Inactive: 3507356 kB' 'Active(anon): 6821468 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534572 kB' 'Mapped: 240128 kB' 'Shmem: 6290448 kB' 'KReclaimable: 226528 kB' 'Slab: 786448 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 559920 kB' 'KernelStack: 20704 kB' 'PageTables: 9564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8338032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 
kB' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.568 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.569 11:15:09 
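[editor's note] At this point the global pool has checked out: get_meminfo HugePages_Total echoed 1024 and the hugepages.sh@110 assertion (( 1024 == nr_hugepages + surp + resv )) holds with surp=0 and resv=0. get_nodes then records the per-node figures (512 pages on each of the two NUMA nodes, no_nodes=2), and the same helper is re-invoked with a node argument, which makes it read /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A minimal usage sketch of that node-scoped form follows; the result values are taken from the echoes and the node-0 meminfo dump in this trace, the comments are mine.

    get_meminfo HugePages_Total       # global pool, /proc/meminfo            -> 1024
    get_meminfo HugePages_Surp 0      # node 0, .../node0/meminfo             -> 0
    get_meminfo HugePages_Total 0     # node 0's share of the pool            -> 512

The trace that continues below is the start of that node-0 pass: the per-node expectation is bumped by resv (0 here) and HugePages_Surp is looked up in node0's meminfo.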
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86524404 kB' 'MemUsed: 11138280 kB' 'SwapCached: 0 kB' 'Active: 5185444 kB' 'Inactive: 3337652 kB' 'Active(anon): 5027904 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8371880 kB' 'Mapped: 88112 kB' 'AnonPages: 154316 kB' 'Shmem: 4876688 kB' 'KernelStack: 11800 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128176 kB' 'Slab: 404708 kB' 'SReclaimable: 128176 kB' 'SUnreclaim: 276532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.569 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 
11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.570 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88970084 kB' 'MemUsed: 4748384 kB' 'SwapCached: 0 kB' 'Active: 2030588 kB' 'Inactive: 169704 kB' 'Active(anon): 1796120 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169704 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1818000 kB' 'Mapped: 152852 kB' 'AnonPages: 382336 kB' 'Shmem: 1413828 kB' 
'KernelStack: 8728 kB' 'PageTables: 5108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98352 kB' 'Slab: 381744 kB' 'SReclaimable: 98352 kB' 'SUnreclaim: 283392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 
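A quick consistency check on the node1 snapshot printed above (illustrative arithmetic only, not part of setup/common.sh): MemUsed should equal MemTotal minus MemFree, and the 512 hugepages reported free match the 512 reported total with no surplus.
# illustrative arithmetic only
echo $(( 93718468 - 88970084 ))   # MemTotal - MemFree = 4748384 kB, matching the MemUsed line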
11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.571 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:26.572 node0=512 expecting 512 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:26.572 node1=512 expecting 512 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:26.572 00:04:26.572 real 0m3.057s 00:04:26.572 user 0m1.218s 00:04:26.572 sys 0m1.909s 00:04:26.572 11:15:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.572 11:15:09 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:26.572 ************************************ 00:04:26.572 END TEST per_node_1G_alloc 00:04:26.572 ************************************ 00:04:26.572 11:15:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:26.572 11:15:10 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:26.572 11:15:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.573 11:15:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.573 11:15:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.573 ************************************ 00:04:26.573 START TEST even_2G_alloc 00:04:26.573 ************************************ 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:26.573 11:15:10 
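A minimal sketch of the sizing the even_2G_alloc trace above walks through, assuming the 2048 kB default hugepage size and the two NUMA nodes shown in the meminfo snapshots (variable names here are illustrative, not taken from setup/hugepages.sh):
# 2 GiB requested, expressed in kB, divided into default-size hugepages and split evenly
size_kb=2097152
hugepage_kb=2048
nodes=2
nr_hugepages=$(( size_kb / hugepage_kb ))   # 1024, matching nr_hugepages in the trace
per_node=$(( nr_hugepages / nodes ))        # 512 per node, matching nodes_test[0] and nodes_test[1]
echo "NRHUGE=${nr_hugepages} per_node=${per_node}"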
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.573 11:15:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.864 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.864 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:29.864 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.865 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 
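The get_meminfo trace that continues below scans /proc/meminfo (or a per-node meminfo file when a node is given) one "key: value" pair at a time until it reaches the requested field. A rough stand-in for that lookup, assuming bash with extglob and the standard meminfo layout (get_field is an illustrative name, not the helper in setup/common.sh):
# pick the system-wide or per-node meminfo file and print the value of one field
shopt -s extglob
get_field() {
    local key=$1 node=${2-} f=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] && \
        f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node +([0-9]) }          # per-node files prefix every field with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "$f"
    return 1
}
# e.g. get_field AnonHugePages        -> 0 on this run
#      get_field HugePages_Surp 1     -> 0 on this run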
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175522724 kB' 'MemAvailable: 178390900 kB' 'Buffers: 3896 kB' 'Cached: 10186056 kB' 'SwapCached: 0 kB' 'Active: 7210300 kB' 'Inactive: 3507356 kB' 'Active(anon): 6818292 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530592 kB' 'Mapped: 239180 kB' 'Shmem: 6290588 kB' 'KReclaimable: 226528 kB' 'Slab: 785396 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558868 kB' 'KernelStack: 20416 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8322384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315356 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- 
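A quick check on the system-wide snapshot printed above (illustrative arithmetic only): with HUGE_EVEN_ALLOC the pool ends up at 1024 pages of 2048 kB each, which is exactly the 2 GiB the test asked for.
# 1024 hugepages x 2048 kB per page
echo $(( 1024 * 2048 ))   # 2097152 kB, matching the Hugetlb line above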
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175522560 kB' 'MemAvailable: 178390736 kB' 'Buffers: 3896 kB' 'Cached: 10186060 kB' 'SwapCached: 0 kB' 'Active: 7209516 kB' 'Inactive: 3507356 kB' 'Active(anon): 6817508 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530308 kB' 'Mapped: 239080 kB' 'Shmem: 6290592 kB' 'KReclaimable: 226528 kB' 'Slab: 785388 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558860 kB' 'KernelStack: 20416 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8322400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315340 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.865 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
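The AnonHugePages lookup above came back as 0 kB, so anon=0. The guard traced before it ("[[ always [madvise] never != *[never]* ]]") only counts anonymous hugepages when transparent hugepages are not disabled; a rough equivalent, assuming the usual sysfs path for the THP setting (variable names are illustrative):
# only add AnonHugePages to the expected total when THP is not set to [never]
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_field AnonHugePages)   # get_field sketched earlier; 0 kB on this run
fi
echo "anon=${anon}"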
00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175522560 kB' 'MemAvailable: 178390736 kB' 'Buffers: 3896 kB' 'Cached: 10186076 kB' 'SwapCached: 0 kB' 'Active: 7209532 kB' 'Inactive: 3507356 kB' 'Active(anon): 6817524 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530308 kB' 'Mapped: 239080 kB' 'Shmem: 6290608 kB' 'KReclaimable: 226528 kB' 'Slab: 785388 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558860 kB' 'KernelStack: 20416 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8322420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315340 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.866 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 
11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
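(Editorial note: every right-hand side in these tests is printed as \H\u\g\e\P\a\g\e\s\_\R\s\v\d and similar. That is simply how bash's xtrace renders a quoted pattern on the right of == inside [[ ]]; the quoting forces a literal string comparison rather than glob matching, so a key such as HugePages_Total cannot accidentally match a HugePages_* pattern. A small illustrative snippet, not taken from the SPDK scripts:)

    key=HugePages_Rsvd
    pat='HugePages_*'
    [[ $key == "$pat" ]] && echo literal    # no output: "$pat" is compared as a literal string
    [[ $key ==  $pat  ]] && echo glob       # prints "glob": unquoted, * matches "Rsvd"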
00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:29.867 nr_hugepages=1024 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.867 resv_hugepages=0 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.867 surplus_hugepages=0 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.867 anon_hugepages=0 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.867 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.868 11:15:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175522560 kB' 'MemAvailable: 178390736 kB' 'Buffers: 3896 kB' 'Cached: 10186100 kB' 'SwapCached: 0 kB' 'Active: 7209560 kB' 'Inactive: 3507356 kB' 'Active(anon): 6817552 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530308 kB' 'Mapped: 239080 kB' 'Shmem: 6290632 kB' 'KReclaimable: 226528 kB' 'Slab: 785388 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558860 kB' 'KernelStack: 20416 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8322444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315356 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 
11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
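(Editorial note: the snapshot echoed at the start of this lookup is internally consistent with the test's name: HugePages_Total is 1024 and Hugepagesize is 2048 kB, so the reserved hugetlb pool is 1024 x 2048 kB = 2097152 kB, i.e. the 2 GiB that even_2G_alloc allocates and then splits evenly across the NUMA nodes. A two-line check of those figures, values copied from the log:)

    total=1024 page_kb=2048
    echo $(( total * page_kb ))    # 2097152 kB == 2 GiB, matching the reported Hugetlb value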
00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.868 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86553436 kB' 'MemUsed: 11109248 kB' 'SwapCached: 0 kB' 'Active: 5180688 kB' 'Inactive: 3337652 kB' 'Active(anon): 5023148 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8371964 kB' 'Mapped: 87804 kB' 'AnonPages: 149596 kB' 'Shmem: 4876772 kB' 'KernelStack: 11736 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128176 kB' 'Slab: 403956 kB' 'SReclaimable: 128176 kB' 'SUnreclaim: 275780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88969548 kB' 'MemUsed: 4748920 kB' 'SwapCached: 0 kB' 'Active: 2028920 
kB' 'Inactive: 169704 kB' 'Active(anon): 1794452 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169704 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1818076 kB' 'Mapped: 151276 kB' 'AnonPages: 380704 kB' 'Shmem: 1413904 kB' 'KernelStack: 8680 kB' 'PageTables: 4844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98352 kB' 'Slab: 381432 kB' 'SReclaimable: 98352 kB' 'SUnreclaim: 283080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.869 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:29.870 node0=512 expecting 512 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:29.870 node1=512 expecting 512 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:29.870 00:04:29.870 real 0m3.047s 00:04:29.870 user 0m1.247s 00:04:29.870 sys 0m1.869s 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.870 11:15:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.870 ************************************ 00:04:29.870 END TEST even_2G_alloc 00:04:29.870 ************************************ 00:04:29.870 
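A minimal sketch of the pattern the even_2G_alloc trace above is exercising, reconstructed from the xtrace alone: SPDK's setup/common.sh get_meminfo helper walks /proc/meminfo (or a node's /sys/devices/system/node/nodeN/meminfo) one field at a time until it reaches the requested key, and the test then checks that the 1024 huge pages were split evenly across the two NUMA nodes (node0=512, node1=512, as echoed above). The function name, the mem_f variable and the ": " field splitting mirror the trace; the sed-based "Node <n>" prefix strip and the argument handling are simplifications assumed here, not the repository's exact code.

#!/usr/bin/env bash
# get_meminfo <field> [numa-node] - print one value from (per-node) meminfo,
# following the loop visible in the xtrace (IFS=': '; read -r var val _).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read that node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <n> "; strip that prefix,
    # then split each line on ": " and stop at the first matching field name.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# The check performed above, condensed: 1024 pages in total, 512 per node.
total=$(get_meminfo HugePages_Total)
node0=$(get_meminfo HugePages_Total 0)
node1=$(get_meminfo HugePages_Total 1)
echo "total=$total node0=$node0 node1=$node1"   # trace above reports 1024 / 512 / 512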
11:15:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:29.870 11:15:13 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:29.870 11:15:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.870 11:15:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.870 11:15:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.870 ************************************ 00:04:29.870 START TEST odd_alloc 00:04:29.870 ************************************ 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.870 11:15:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:32.406 0000:00:04.7 (8086 2021): Already using 
the vfio-pci driver 00:04:32.406 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:32.406 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:32.406 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.406 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175517964 kB' 'MemAvailable: 178386140 kB' 'Buffers: 3896 kB' 'Cached: 10186204 kB' 'SwapCached: 0 kB' 'Active: 7212544 kB' 'Inactive: 3507356 kB' 'Active(anon): 6820536 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533024 kB' 'Mapped: 239604 kB' 'Shmem: 6290736 kB' 'KReclaimable: 226528 kB' 'Slab: 785052 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558524 kB' 'KernelStack: 20416 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8325656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315452 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.672 11:15:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.672 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.672 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.672 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.672 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.672 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.672 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.672 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.672 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.672 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.672 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:32.672 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.672 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... repetitive xtrace condensed: setup/common.sh@32 compares each /proc/meminfo field from Inactive through CommitLimit against AnonHugePages and issues continue for every non-matching key ...]
00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.673 
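
The trace above has just resolved AnonHugePages to 0 (hugepages.sh@97 anon=0) and is re-entering get_meminfo for HugePages_Surp. The pattern being exercised is a field lookup over /proc/meminfo (or a per-node meminfo file when a node is given): split each line on ': ', skip non-matching fields, and echo the value of the requested one. A minimal sketch of that pattern, assuming the hypothetical helper name lookup_meminfo rather than the real get_meminfo in setup/common.sh (which, as the trace shows, also handles per-node meminfo files and strips the "Node N" prefix):

    #!/usr/bin/env bash
    # Sketch only: simplified stand-in for the key scan traced above.
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching field
            echo "$val"                        # value in kB, or a bare count
            return 0
        done < /proc/meminfo
        return 1                               # requested field not present
    }
    # e.g. lookup_meminfo HugePages_Total prints 1025 on this node;
    # hugepages.sh then verifies (( 1025 == nr_hugepages + surp + resv )) for this odd_alloc run.
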
11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175513176 kB' 'MemAvailable: 178381352 kB' 'Buffers: 3896 kB' 'Cached: 10186208 kB' 'SwapCached: 0 kB' 'Active: 7216196 kB' 'Inactive: 3507356 kB' 'Active(anon): 6824188 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536712 kB' 'Mapped: 239872 kB' 'Shmem: 6290740 kB' 'KReclaimable: 226528 kB' 'Slab: 785032 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558504 kB' 'KernelStack: 20432 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8328860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315424 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.673 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.673 
11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... repetitive xtrace condensed: setup/common.sh@32 compares each /proc/meminfo field from Cached through HugePages_Total against HugePages_Surp and issues continue for every non-matching key ...]
00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
-- # IFS=': ' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175520892 kB' 'MemAvailable: 178389068 kB' 'Buffers: 3896 kB' 'Cached: 10186224 kB' 'SwapCached: 0 kB' 'Active: 7210320 kB' 'Inactive: 3507356 kB' 'Active(anon): 6818312 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530808 kB' 'Mapped: 239372 kB' 'Shmem: 6290756 kB' 'KReclaimable: 226528 kB' 'Slab: 785080 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558552 kB' 'KernelStack: 20416 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8322896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315436 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.675 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... repetitive xtrace condensed: setup/common.sh@32 compares each /proc/meminfo field from Active(anon) through FileHugePages against HugePages_Rsvd and issues continue for every non-matching key ...]
00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:32.677 nr_hugepages=1025 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.677 resv_hugepages=0 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.677 surplus_hugepages=0 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.677 anon_hugepages=0 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 
1025 == nr_hugepages )) 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175521196 kB' 'MemAvailable: 178389372 kB' 'Buffers: 3896 kB' 'Cached: 10186276 kB' 'SwapCached: 0 kB' 'Active: 7210052 kB' 'Inactive: 3507356 kB' 'Active(anon): 6818044 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530580 kB' 'Mapped: 239088 kB' 'Shmem: 6290808 kB' 'KReclaimable: 226528 kB' 'Slab: 785072 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558544 kB' 'KernelStack: 20432 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8323288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315436 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.677 11:15:16 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue
[... repetitive xtrace condensed: setup/common.sh@32 compares each /proc/meminfo field from Buffers through PageTables against HugePages_Total and issues continue for every non-matching key ...]
00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
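The field-by-field "continue" checks above and below are get_meminfo scanning /proc/meminfo for HugePages_Total. Condensed, the helper the trace is exercising looks roughly like the sketch below, reconstructed only from the traced commands (the trace prefixes name it setup/common.sh; exact error handling and structure there may differ):

    get_meminfo() {                              # sketch reconstructed from the xtrace above
        local get=$1 node=$2 var val
        local mem_f=/proc/meminfo mem
        shopt -s extglob
        # A per-node query reads that node's meminfo, whose fields carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")         # strip the "Node N " prefix so keys look like /proc/meminfo
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"        # "_" swallows the trailing kB unit
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. HugePages_Total -> 1025
        done
        return 1
    }

Called with no node argument it reads /proc/meminfo, as in the scan above; called as get_meminfo HugePages_Surp 0 it reads node0's meminfo, which is what the per-node checks later in this test do.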
00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.678 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86553068 kB' 'MemUsed: 11109616 kB' 'SwapCached: 0 kB' 'Active: 5179720 kB' 'Inactive: 3337652 kB' 'Active(anon): 5022180 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8371976 kB' 'Mapped: 87804 kB' 'AnonPages: 148532 kB' 'Shmem: 4876784 kB' 'KernelStack: 11720 kB' 
'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128176 kB' 'Slab: 403764 kB' 'SReclaimable: 128176 kB' 'SUnreclaim: 275588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88968128 kB' 'MemUsed: 4750340 kB' 'SwapCached: 0 kB' 'Active: 2030632 kB' 'Inactive: 169704 kB' 'Active(anon): 1796164 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169704 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1818220 kB' 'Mapped: 151284 kB' 'AnonPages: 382328 kB' 'Shmem: 1414048 kB' 'KernelStack: 8712 kB' 'PageTables: 4884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98352 kB' 'Slab: 381308 kB' 'SReclaimable: 98352 kB' 'SUnreclaim: 282956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.681 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
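The node0 and node1 HugePages_Surp lookups above feed the comparison that completes just below. Condensed, and assuming the get_meminfo sketch given earlier, the odd_alloc verification amounts to roughly the following (the sysfs path used to fill nodes_sys is an assumption; the trace only shows the resulting 512 and 513 assignments):

    nodes_test=(512 513)          # requested odd split, set earlier by get_test_nr_hugepages_per_node
    nodes_sys=() sorted_t=() sorted_s=()
    resv=0                        # HugePages_Rsvd reported 0 in this run
    shopt -s extglob
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # expectation includes reserved pages
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # plus per-node surplus (0 in this run)
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1      # index by count so the key list comes out sorted
        sorted_s[nodes_sys[node]]=1
    done
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]    # compares "512 513" against "512 513"

The node0=512 expecting 513 / node1=513 expecting 512 messages that follow show the kernel placed the odd extra page on the opposite node from the request; the test still passes because only the sorted set of per-node counts is compared, not the exact placement.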
00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:32.682 node0=512 expecting 513 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:32.682 node1=513 expecting 512 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:32.682 00:04:32.682 real 0m3.042s 00:04:32.682 user 0m1.270s 00:04:32.682 sys 0m1.843s 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.682 11:15:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:32.682 ************************************ 00:04:32.682 END TEST odd_alloc 00:04:32.682 ************************************ 00:04:32.682 11:15:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:32.682 11:15:16 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:32.682 11:15:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.682 11:15:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.682 11:15:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:32.942 ************************************ 00:04:32.942 START TEST custom_alloc 00:04:32.942 ************************************ 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size 
>= default_hugepages )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.942 11:15:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.479 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.479 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:00:04.3 (8086 
2021): Already using the vfio-pci driver 00:04:35.479 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:35.479 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174473404 kB' 'MemAvailable: 177341580 kB' 'Buffers: 3896 kB' 'Cached: 10186356 kB' 'SwapCached: 0 kB' 'Active: 7211916 kB' 'Inactive: 3507356 kB' 'Active(anon): 6819908 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531824 kB' 'Mapped: 239188 
kB' 'Shmem: 6290888 kB' 'KReclaimable: 226528 kB' 'Slab: 785156 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558628 kB' 'KernelStack: 20448 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8323624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 
11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.745 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
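The trace above is setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time under set -x: each key is compared against the requested field (AnonHugePages here) and skipped with "continue" until the match, whose value is then echoed (0 on this run, hence anon=0 just below). A minimal sketch of that lookup pattern, assuming a plain /proc/meminfo with no per-node meminfo file; the helper name is hypothetical and this is not the actual setup/common.sh:

    get_meminfo_sketch() {
        # $1 = field to look up, e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # skip every field until the requested one, then print its value
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        echo 0
    }

    # e.g. get_meminfo_sketch HugePages_Total -> 1536 on this box, matching HugePages_Total in the log above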
00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:35.746 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 
-- # local node= 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174477116 kB' 'MemAvailable: 177345292 kB' 'Buffers: 3896 kB' 'Cached: 10186360 kB' 'SwapCached: 0 kB' 'Active: 7210772 kB' 'Inactive: 3507356 kB' 'Active(anon): 6818764 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531184 kB' 'Mapped: 239104 kB' 'Shmem: 6290892 kB' 'KReclaimable: 226528 kB' 'Slab: 785132 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558604 kB' 'KernelStack: 20416 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8323644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.747 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 
11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.748 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174478056 kB' 'MemAvailable: 177346232 kB' 'Buffers: 3896 kB' 'Cached: 10186376 kB' 'SwapCached: 0 kB' 'Active: 7210808 kB' 'Inactive: 3507356 kB' 'Active(anon): 6818800 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531108 kB' 'Mapped: 239104 kB' 'Shmem: 6290908 kB' 'KReclaimable: 226528 kB' 'Slab: 785132 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558604 kB' 'KernelStack: 20384 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8325912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 
11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.749 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:35.750 nr_hugepages=1536 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.750 resv_hugepages=0 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.750 surplus_hugepages=0 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.750 anon_hugepages=0 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.750 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174477728 kB' 'MemAvailable: 177345904 kB' 'Buffers: 3896 kB' 'Cached: 10186376 kB' 'SwapCached: 0 kB' 'Active: 7213428 kB' 'Inactive: 3507356 kB' 'Active(anon): 6821420 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533764 kB' 'Mapped: 239632 kB' 'Shmem: 
6290908 kB' 'KReclaimable: 226528 kB' 'Slab: 785132 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558604 kB' 'KernelStack: 20400 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8328580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315436 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
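The trace above is the get_meminfo loop in setup/common.sh at work: with IFS=': ' each line of /proc/meminfo is split by read -r var val _, every key that is not the requested one (first HugePages_Rsvd, now HugePages_Total) falls through to the continue branch, and only the matching line's value is echoed back to the caller (0 reserved pages, then 1536 total pages, hence the resv=0 and nr_hugepages=1536 echoes). A minimal, hypothetical sketch of that skip-until-match parsing pattern follows; the function name and the reduction to /proc/meminfo only are illustrative assumptions, not the actual SPDK helper.

    # Sketch only: the same IFS=': ' / read -r skip-until-match idea as the
    # traced setup/common.sh loop, reduced to /proc/meminfo. The traced helper
    # can also read /sys/devices/system/node/nodeN/meminfo after stripping the
    # leading "Node N " prefix from each line.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other field
            echo "$val"                        # e.g. 1536 for HugePages_Total
            return 0
        done < /proc/meminfo
        return 1
    }

Called as get_meminfo_sketch HugePages_Rsvd or get_meminfo_sketch HugePages_Total, it would report the same 0 and 1536 that the log echoes above on this box.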
00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.751 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
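Having confirmed that 1536 == nr_hugepages + surp + resv against the global counters, the script calls get_nodes; as the nodes_sys assignments just below show, the requested custom split is 512 pages on node 0 and 1024 on node 1 (no_nodes=2), and each node is then re-checked via HugePages_Surp from its /sys/devices/system/node/nodeN/meminfo. A hedged sketch of that per-node accounting follows; the variable names are illustrative and this is not the setup/hugepages.sh code itself.

    # Sketch only: sum per-node HugePages_Total from the node-local meminfo
    # files, the same sysfs data the per-node get_meminfo calls below walk.
    total=0
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -e $node_dir/meminfo ]] || continue
        node=${node_dir##*node}
        pages=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
        echo "node$node: HugePages_Total=$pages"
        total=$((total + pages))
    done
    echo "sum across nodes: $total"   # 512 + 1024 = 1536 on this system

The per-node printf blocks below report exactly those values (HugePages_Total: 512 for node 0 and 1024 for node 1), so the sum matches the 1536 pages already verified against /proc/meminfo.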
00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86557012 kB' 'MemUsed: 11105672 kB' 'SwapCached: 0 kB' 'Active: 5180176 kB' 'Inactive: 3337652 kB' 'Active(anon): 5022636 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8371988 kB' 'Mapped: 87828 kB' 'AnonPages: 149004 kB' 'Shmem: 4876796 kB' 'KernelStack: 11848 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128176 kB' 'Slab: 403924 kB' 'SReclaimable: 128176 kB' 'SUnreclaim: 275748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.752 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.753 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87914276 kB' 'MemUsed: 5804192 kB' 'SwapCached: 0 kB' 'Active: 2030944 kB' 'Inactive: 169704 kB' 'Active(anon): 1796476 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169704 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1818348 kB' 'Mapped: 151300 kB' 'AnonPages: 382452 kB' 'Shmem: 1414176 kB' 'KernelStack: 8648 kB' 'PageTables: 4740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98352 kB' 'Slab: 381200 kB' 'SReclaimable: 98352 kB' 'SUnreclaim: 282848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.754 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.755 11:15:19 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:35.755 node0=512 expecting 512 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:35.755 node1=1024 expecting 1024 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:35.755 00:04:35.755 real 0m3.056s 00:04:35.755 user 0m1.240s 00:04:35.755 sys 0m1.882s 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.755 11:15:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:35.755 ************************************ 00:04:35.755 END TEST custom_alloc 00:04:35.755 ************************************ 00:04:36.044 11:15:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:36.044 11:15:19 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:36.044 11:15:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.044 11:15:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.044 11:15:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.044 ************************************ 00:04:36.044 START TEST no_shrink_alloc 00:04:36.044 ************************************ 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:36.044 11:15:19 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.044 11:15:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.581 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:38.581 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:38.581 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175495820 kB' 'MemAvailable: 178363996 kB' 'Buffers: 3896 kB' 'Cached: 10186512 kB' 'SwapCached: 0 kB' 'Active: 7213300 kB' 'Inactive: 3507356 kB' 'Active(anon): 6821292 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534120 kB' 'Mapped: 239204 kB' 'Shmem: 6291044 kB' 'KReclaimable: 226528 kB' 'Slab: 785320 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558792 kB' 'KernelStack: 20256 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8325784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.846 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 
11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
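The trace above is setup/common.sh's get_meminfo helper walking every /proc/meminfo key until it reaches the one requested (here AnonHugePages). A minimal sketch of that pattern, reconstructed from the trace rather than copied from the script, is below; names follow the trace, details such as the per-node branch are simplified.

  # Hedged reconstruction of the scan shown in the xtrace above.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node stats are read from sysfs when a node id is given and present.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      shopt -s extglob                       # needed for the +([0-9]) pattern
      mem=("${mem[@]#Node +([0-9]) }")       # strip "Node N " prefix of per-node files
      local line var val _
      for line in "${mem[@]}"; do
          # Each key is compared against the requested name; xtrace prints the
          # right-hand side of == escaped (\A\n\o\n...), which is what fills the log.
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "${val:-0}"
          return 0
      done
      echo 0
  }
  # Example (illustrative): get_meminfo HugePages_Free 1   -> per-node-1 free count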
00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.847 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175498496 kB' 'MemAvailable: 178366672 kB' 'Buffers: 3896 kB' 'Cached: 10186516 kB' 'SwapCached: 0 kB' 'Active: 7213808 kB' 'Inactive: 3507356 kB' 'Active(anon): 6821800 
kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534124 kB' 'Mapped: 239196 kB' 'Shmem: 6291048 kB' 'KReclaimable: 226528 kB' 'Slab: 785428 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558900 kB' 'KernelStack: 20544 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8327292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315644 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
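Earlier in this section the no_shrink_alloc test called get_test_nr_hugepages 2097152 0 and arrived at nr_hugepages=1024 assigned to node 0. Given the 2048 kB Hugepagesize in the meminfo dump above, that count is presumably just the requested size divided by the default huge page size; the division itself is an assumption, the variable names follow the trace.

  # Assumed reconstruction of the count seen in the trace.
  size=2097152               # kB, requested by no_shrink_alloc
  default_hugepages=2048     # kB, "Hugepagesize" from /proc/meminfo
  (( size >= default_hugepages )) || exit 1
  nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
  # The single user-supplied node id then receives the whole count:
  user_nodes=('0')
  nodes_test=()
  for _no_nodes in "${user_nodes[@]}"; do
      nodes_test[_no_nodes]=$nr_hugepages        # node0 -> 1024
  done
  echo "nr_hugepages=$nr_hugepages on node ${user_nodes[0]}"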
00:04:38.848 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.849 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
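At this point verify_nr_hugepages has recorded anon=0 and surp=0 and is scanning for HugePages_Rsvd. The hugepages.sh line numbers in the trace suggest the verification gathers these counters system-wide before checking the pool; a hedged sketch of that shape follows, using a get_meminfo helper like the one sketched above. The resv handling and the exact comparisons are assumptions, not the script's own checks.

  # Hedged sketch of the verification flow implied by the trace.
  anon=$(get_meminfo AnonHugePages)      # 0 in the trace above
  surp=$(get_meminfo HugePages_Surp)     # 0 in the trace above
  resv=$(get_meminfo HugePages_Rsvd)     # being read at this point in the log
  total=$(get_meminfo HugePages_Total)   # 1024 per the meminfo dump
  # Expect no transparent/surplus/reserved huge pages and the full pool present.
  (( anon == 0 && surp == 0 && resv == 0 )) || echo "unexpected huge page state"
  (( total == 1024 )) || echo "expected 1024 huge pages, got $total"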
00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175497920 kB' 'MemAvailable: 178366096 kB' 'Buffers: 3896 kB' 'Cached: 10186516 kB' 'SwapCached: 0 kB' 'Active: 7214112 kB' 'Inactive: 3507356 kB' 'Active(anon): 6822104 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534444 kB' 'Mapped: 239188 kB' 'Shmem: 6291048 kB' 
'KReclaimable: 226528 kB' 'Slab: 785392 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558864 kB' 'KernelStack: 20576 kB' 'PageTables: 9396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8327316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.850 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.851 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:38.852 nr_hugepages=1024 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.852 resv_hugepages=0 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.852 surplus_hugepages=0 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.852 anon_hugepages=0 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175496916 kB' 'MemAvailable: 178365092 kB' 'Buffers: 3896 kB' 'Cached: 10186552 kB' 'SwapCached: 0 kB' 'Active: 7214152 kB' 'Inactive: 3507356 kB' 'Active(anon): 6822144 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534460 kB' 'Mapped: 239188 kB' 'Shmem: 6291084 kB' 'KReclaimable: 226528 kB' 'Slab: 785392 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 558864 kB' 'KernelStack: 20464 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8325848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.852 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.853 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85507984 kB' 'MemUsed: 12154700 kB' 'SwapCached: 0 kB' 'Active: 5181544 kB' 'Inactive: 3337652 kB' 'Active(anon): 5024004 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8372012 kB' 'Mapped: 87864 kB' 'AnonPages: 150288 kB' 'Shmem: 4876820 kB' 'KernelStack: 11960 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128176 kB' 'Slab: 404068 kB' 'SReclaimable: 128176 kB' 'SUnreclaim: 275892 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 
11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.854 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:38.855 node0=1024 expecting 1024 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.855 11:15:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.145 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:42.145 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:80:04.6 (8086 2021): Already using the 
vfio-pci driver 00:04:42.145 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:42.145 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:42.145 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175502476 kB' 'MemAvailable: 178370652 kB' 'Buffers: 3896 kB' 'Cached: 10186652 kB' 'SwapCached: 0 kB' 'Active: 7215764 kB' 'Inactive: 3507356 kB' 'Active(anon): 6823756 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535432 kB' 'Mapped: 239236 kB' 'Shmem: 6291184 kB' 'KReclaimable: 226528 kB' 'Slab: 785856 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 559328 kB' 'KernelStack: 20736 kB' 'PageTables: 9812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8326476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 
kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.145 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.146 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.147 11:15:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175506964 kB' 'MemAvailable: 178375140 kB' 'Buffers: 3896 kB' 'Cached: 10186656 kB' 'SwapCached: 0 kB' 'Active: 7214144 kB' 'Inactive: 3507356 kB' 'Active(anon): 6822136 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534304 kB' 'Mapped: 239160 kB' 'Shmem: 6291188 kB' 'KReclaimable: 226528 kB' 'Slab: 785852 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 559324 kB' 'KernelStack: 20496 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8325368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.147 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.148 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.149 
11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175506712 kB' 'MemAvailable: 178374888 kB' 'Buffers: 3896 kB' 'Cached: 10186656 kB' 'SwapCached: 0 kB' 'Active: 7214120 kB' 'Inactive: 3507356 kB' 'Active(anon): 6822112 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534348 kB' 'Mapped: 239136 kB' 'Shmem: 6291188 kB' 'KReclaimable: 226528 kB' 'Slab: 785792 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 559264 kB' 'KernelStack: 20528 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8325392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.149 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.150 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:42.151 nr_hugepages=1024 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.151 resv_hugepages=0 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.151 surplus_hugepages=0 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.151 anon_hugepages=0 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.151 11:15:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175506460 kB' 'MemAvailable: 178374636 kB' 'Buffers: 3896 kB' 'Cached: 10186716 kB' 'SwapCached: 0 kB' 'Active: 7213816 kB' 'Inactive: 3507356 kB' 'Active(anon): 6821808 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533972 kB' 'Mapped: 239136 kB' 'Shmem: 6291248 kB' 'KReclaimable: 226528 kB' 'Slab: 785792 kB' 'SReclaimable: 226528 kB' 'SUnreclaim: 559264 kB' 'KernelStack: 20512 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8325412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2794452 kB' 'DirectMap2M: 14710784 kB' 'DirectMap1G: 184549376 kB' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
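The long runs of '-- # continue' above are bash xtrace from the suite's get_meminfo helper scanning a meminfo file key by key: it splits each line on ': ', skips every key that is not the one requested, and echoes the value once it matches. A minimal self-contained sketch of that scan pattern (function and variable names here are illustrative, not the exact setup/common.sh implementation):

    get_meminfo_sketch() {
        local key=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            # every skipped key shows up as a '-- # continue' line in the xtrace
            [[ $var == "$key" ]] || continue
            echo "$val"
            return 0
        done < "$file"
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Rsvd   -> 0    on this host
    #      get_meminfo_sketch HugePages_Total  -> 1024

The real helper also snapshots the whole file with mapfile first (hence the printf of the full meminfo contents above), but the matching loop works the same way.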
00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.151 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.152 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.153 11:15:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85519904 kB' 'MemUsed: 12142780 kB' 'SwapCached: 0 kB' 'Active: 5182624 kB' 'Inactive: 3337652 kB' 'Active(anon): 5025084 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8372048 kB' 'Mapped: 87804 kB' 'AnonPages: 151432 kB' 'Shmem: 4876856 kB' 'KernelStack: 11848 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128176 kB' 'Slab: 404308 kB' 'SReclaimable: 128176 kB' 'SUnreclaim: 276132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
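When a node argument is supplied, the same helper switches its input from /proc/meminfo to the per-node file, which is what the two [[ -e /sys/devices/system/node/... ]] checks show (the empty-node path earlier stayed on /proc/meminfo; here node=0 selects node0/meminfo). A small sketch of that selection, assuming the sysfs layout seen in this run:

    pick_meminfo_file() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }
    # pick_meminfo_file     -> /proc/meminfo (global pass)
    # pick_meminfo_file 0   -> /sys/devices/system/node/node0/meminfo

Per-node files prefix each line with 'Node <n> ', which the suite strips before scanning (the ${mem[@]#Node +([0-9]) } expansion above).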
00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.153 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.153 11:15:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.154 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:42.155 node0=1024 expecting 1024 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:42.155 00:04:42.155 real 0m5.965s 00:04:42.155 user 0m2.383s 00:04:42.155 sys 0m3.717s 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.155 11:15:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:42.155 ************************************ 00:04:42.155 END TEST no_shrink_alloc 00:04:42.155 ************************************ 00:04:42.155 11:15:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:42.155 11:15:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:42.155 00:04:42.155 real 0m22.691s 00:04:42.155 user 0m8.913s 00:04:42.155 sys 0m13.529s 00:04:42.155 11:15:25 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.155 11:15:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:42.155 ************************************ 00:04:42.155 END TEST hugepages 00:04:42.155 ************************************ 00:04:42.155 11:15:25 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:42.155 11:15:25 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:42.155 11:15:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.155 11:15:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.155 11:15:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.155 ************************************ 00:04:42.155 START TEST driver 00:04:42.155 ************************************ 00:04:42.155 11:15:25 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:42.155 * Looking for test storage... 
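The clear_hp pass traced in the hugepages teardown above resets every per-node hugepage pool to zero before the next suite starts. A minimal sketch of that cleanup follows, assuming the standard kernel sysfs layout and that the echo 0 at hugepages.sh@41 writes each pool's nr_hugepages attribute (the target file name is inferred, not shown in the trace):

# Sketch only -- not part of the captured log. Mirrors the clear_hp loop traced above.
# The nr_hugepages target file is an assumption (standard sysfs); the trace only shows "echo 0".
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # drop this pool back to zero pages
    done
done
export CLEAR_HUGE=yes                 # exported by hugepages.sh@45 in the trace above
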
00:04:42.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:42.155 11:15:25 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:42.155 11:15:25 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.155 11:15:25 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.349 11:15:29 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:46.349 11:15:29 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.349 11:15:29 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.349 11:15:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:46.349 ************************************ 00:04:46.349 START TEST guess_driver 00:04:46.349 ************************************ 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:46.349 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:46.349 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:46.349 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:46.349 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:46.349 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:46.349 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:46.349 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:46.349 11:15:29 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:46.349 Looking for driver=vfio-pci 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.349 11:15:29 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:48.880 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.880 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.880 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.880 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.880 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.880 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.880 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.880 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.140 11:15:32 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.140 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.141 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.141 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.141 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.141 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.141 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.141 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.141 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.141 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.141 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.141 11:15:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.079 11:15:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.079 11:15:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.079 11:15:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.079 11:15:33 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:50.079 11:15:33 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:50.079 11:15:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.079 11:15:33 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.270 00:04:54.270 real 0m7.942s 00:04:54.270 user 0m2.326s 00:04:54.270 sys 0m4.092s 00:04:54.270 11:15:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.270 11:15:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:54.270 ************************************ 00:04:54.270 END TEST guess_driver 00:04:54.270 ************************************ 00:04:54.270 11:15:37 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:54.270 00:04:54.270 real 0m12.207s 00:04:54.270 user 0m3.546s 00:04:54.270 sys 0m6.334s 00:04:54.270 11:15:37 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.270 11:15:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:54.270 ************************************ 00:04:54.270 END TEST driver 00:04:54.270 ************************************ 00:04:54.270 11:15:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:54.270 11:15:37 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:54.270 11:15:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.270 11:15:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.270 11:15:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:54.270 ************************************ 00:04:54.270 START TEST devices 00:04:54.270 ************************************ 00:04:54.270 11:15:37 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:54.270 * Looking for test storage... 00:04:54.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:54.270 11:15:37 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:54.270 11:15:37 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:54.270 11:15:37 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.270 11:15:37 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:57.559 11:15:40 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:57.559 11:15:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:57.559 11:15:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:57.559 11:15:40 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:57.559 11:15:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:57.559 11:15:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:57.559 11:15:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:57.559 11:15:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:57.559 11:15:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:57.559 11:15:40 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:57.559 11:15:40 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:57.559 11:15:40 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:57.559 11:15:40 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:57.559 11:15:40 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:57.559 11:15:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:57.559 11:15:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:57.559 11:15:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:57.559 11:15:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:57.559 11:15:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:57.559 11:15:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:57.559 11:15:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:57.559 
11:15:40 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:57.559 No valid GPT data, bailing 00:04:57.559 11:15:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:57.559 11:15:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:57.559 11:15:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:57.559 11:15:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:57.559 11:15:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:57.559 11:15:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:57.559 11:15:41 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:57.559 11:15:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:57.559 11:15:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:57.559 11:15:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:57.559 11:15:41 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:57.559 11:15:41 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:57.559 11:15:41 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:57.559 11:15:41 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.559 11:15:41 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.559 11:15:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:57.559 ************************************ 00:04:57.559 START TEST nvme_mount 00:04:57.559 ************************************ 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:57.559 11:15:41 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:58.493 Creating new GPT entries in memory. 00:04:58.493 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:58.493 other utilities. 00:04:58.493 11:15:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:58.493 11:15:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.493 11:15:42 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:58.493 11:15:42 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.493 11:15:42 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:59.870 Creating new GPT entries in memory. 00:04:59.870 The operation has completed successfully. 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 404712 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.870 11:15:43 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.870 11:15:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.405 11:15:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.664 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.664 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:02.664 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.664 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.664 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.664 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:02.664 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.664 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.664 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:02.664 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:02.664 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:02.664 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:02.664 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:02.924 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:02.924 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:02.924 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:02.924 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.924 11:15:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:05.533 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.792 11:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
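By this point the nvme_mount test has already driven the disk through its partition/format/mount/verify/cleanup cycle (sgdisk, mkfs.ext4, mount, umount, wipefs in the trace above); the real scripts also wait for udev events and re-check setup.sh status between steps. A condensed sketch of that cycle, with the device and a scratch mount point standing in for the exact paths used by the test:

# Sketch only -- not part of the captured log. Condenses the nvme_mount cycle traced above;
# disk and mnt are stand-ins for the device and mount point the test scripts use.
disk=/dev/nvme0n1
mnt=/tmp/nvme_mount_sketch
sgdisk "$disk" --zap-all                # wipe any existing partition table
sgdisk "$disk" --new=1:2048:2099199     # create a 1 GiB first partition, as above
mkfs.ext4 -qF "${disk}p1"               # format it
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"                # mount and drop a dummy file for the verify step
touch "$mnt/test_nvme"
umount "$mnt"                           # cleanup mirrors cleanup_nvme above
wipefs --all "${disk}p1"
wipefs --all "$disk"
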
00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:09.083 11:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.083 11:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.083 11:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:09.083 11:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:09.083 11:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:09.083 11:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.083 11:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.083 11:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.083 11:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:09.083 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:09.083 00:05:09.083 real 0m11.056s 00:05:09.083 user 0m3.253s 00:05:09.083 sys 0m5.642s 00:05:09.083 11:15:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.083 11:15:52 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.083 ************************************ 00:05:09.083 END TEST nvme_mount 00:05:09.083 ************************************ 00:05:09.083 11:15:52 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:09.083 11:15:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:09.083 11:15:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.083 11:15:52 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.083 11:15:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:09.083 ************************************ 00:05:09.083 START TEST dm_mount 00:05:09.083 ************************************ 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:09.083 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:09.084 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.084 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:09.084 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:09.084 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.084 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:09.084 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:09.084 11:15:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:09.653 Creating new GPT entries in memory. 00:05:09.653 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:09.653 other utilities. 00:05:09.653 11:15:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:09.653 11:15:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.653 11:15:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:09.653 11:15:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:09.653 11:15:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:11.030 Creating new GPT entries in memory. 00:05:11.030 The operation has completed successfully. 00:05:11.030 11:15:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:11.030 11:15:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.030 11:15:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:11.030 11:15:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:11.030 11:15:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:11.968 The operation has completed successfully. 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 408884 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.968 11:15:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:14.502 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:14.762 11:15:58 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.762 11:15:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.053 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.054 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.054 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.054 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.054 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.054 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.054 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.054 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:18.054 11:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:18.054 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:18.054 00:05:18.054 real 0m8.959s 00:05:18.054 user 0m2.236s 00:05:18.054 sys 0m3.767s 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.054 11:16:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:18.054 ************************************ 00:05:18.054 END TEST dm_mount 00:05:18.054 ************************************ 00:05:18.054 11:16:01 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:18.054 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:18.054 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:18.054 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:18.054 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:18.054 11:16:01 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:18.054 00:05:18.054 real 0m23.734s 00:05:18.054 user 0m6.819s 00:05:18.054 sys 0m11.679s 00:05:18.054 11:16:01 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.054 11:16:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:18.054 ************************************ 00:05:18.054 END TEST devices 00:05:18.054 ************************************ 00:05:18.054 11:16:01 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:18.054 00:05:18.054 real 1m19.418s 00:05:18.054 user 0m26.370s 00:05:18.054 sys 0m43.931s 00:05:18.054 11:16:01 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.054 11:16:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:18.054 ************************************ 00:05:18.054 END TEST setup.sh 00:05:18.054 ************************************ 00:05:18.054 11:16:01 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.054 11:16:01 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:21.340 Hugepages 00:05:21.340 node hugesize free / total 00:05:21.340 node0 1048576kB 0 / 0 00:05:21.340 node0 2048kB 2048 / 2048 00:05:21.340 node1 1048576kB 0 / 0 00:05:21.340 node1 2048kB 0 / 0 00:05:21.340 00:05:21.340 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.340 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:21.340 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:21.340 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:21.340 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:21.340 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:21.340 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:21.340 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:21.340 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:21.340 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:21.340 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:21.340 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:21.340 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:21.340 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:21.340 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:21.340 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:21.340 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:21.340 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:21.340 11:16:04 -- spdk/autotest.sh@130 -- # uname -s 00:05:21.340 11:16:04 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:21.340 11:16:04 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:21.340 11:16:04 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:23.872 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:23.872 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:24.809 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:24.809 11:16:08 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:25.744 11:16:09 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:25.744 11:16:09 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:25.744 11:16:09 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.744 11:16:09 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:25.744 11:16:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:25.744 11:16:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:25.744 11:16:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.745 11:16:09 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:25.745 11:16:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:25.745 11:16:09 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:25.745 11:16:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:25.745 11:16:09 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:29.038 Waiting for block devices as requested 00:05:29.038 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:29.038 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:29.038 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:29.038 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:29.038 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:29.038 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:29.038 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:29.377 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:29.377 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:05:29.377 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:29.377 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:29.377 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:29.637 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:29.637 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:29.637 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:29.896 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:29.896 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:29.896 11:16:13 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:29.896 11:16:13 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:29.896 11:16:13 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:29.896 11:16:13 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:05:29.896 11:16:13 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:29.896 11:16:13 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:29.896 11:16:13 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:29.896 11:16:13 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:29.896 11:16:13 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:29.896 11:16:13 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:29.896 11:16:13 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:29.896 11:16:13 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:29.896 11:16:13 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:29.896 11:16:13 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:29.896 11:16:13 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:29.896 11:16:13 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:29.896 11:16:13 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:29.896 11:16:13 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:29.896 11:16:13 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:29.896 11:16:13 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:29.896 11:16:13 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:29.896 11:16:13 -- common/autotest_common.sh@1557 -- # continue 00:05:29.896 11:16:13 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:29.896 11:16:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.896 11:16:13 -- common/autotest_common.sh@10 -- # set +x 00:05:29.896 11:16:13 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:29.896 11:16:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.896 11:16:13 -- common/autotest_common.sh@10 -- # set +x 00:05:30.155 11:16:13 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:32.694 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:32.694 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:32.694 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:32.694 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:32.954 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:32.954 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:32.954 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:32.954 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:32.954 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:32.954 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:05:32.954 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:32.954 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:32.954 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:32.954 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:32.954 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:32.954 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:33.890 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:33.890 11:16:17 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:33.890 11:16:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.890 11:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:33.890 11:16:17 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:33.890 11:16:17 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:33.890 11:16:17 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:33.890 11:16:17 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:33.890 11:16:17 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:33.890 11:16:17 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:33.890 11:16:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:33.890 11:16:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:33.890 11:16:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:33.891 11:16:17 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:33.891 11:16:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:33.891 11:16:17 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:33.891 11:16:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:33.891 11:16:17 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:33.891 11:16:17 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:33.891 11:16:17 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:33.891 11:16:17 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:33.891 11:16:17 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:33.891 11:16:17 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:05:33.891 11:16:17 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:05:33.891 11:16:17 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=417697 00:05:33.891 11:16:17 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.891 11:16:17 -- common/autotest_common.sh@1598 -- # waitforlisten 417697 00:05:33.891 11:16:17 -- common/autotest_common.sh@829 -- # '[' -z 417697 ']' 00:05:33.891 11:16:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.891 11:16:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.891 11:16:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.891 11:16:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.891 11:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:34.150 [2024-07-15 11:16:17.503757] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
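The get_nvme_bdfs_by_id step traced just above enumerates NVMe PCI addresses with gen_nvme.sh | jq and keeps only those whose sysfs device id matches 0x0a54. A minimal sketch of that idea follows; the gen_nvme.sh path and the jq filter mirror the log, while the loop itself is an illustration, not a copy of autotest_common.sh.

#!/usr/bin/env bash
# Sketch: collect NVMe BDFs whose PCI device id is 0x0a54 (the id checked in the log above).
set -eo pipefail
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
target_id=0x0a54
bdfs=()
while read -r bdf; do
    # /sys/bus/pci/devices/<bdf>/device holds the PCI device id, e.g. 0x0a54
    if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$target_id" ]]; then
        bdfs+=("$bdf")
    fi
done < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
((${#bdfs[@]})) && printf '%s\n' "${bdfs[@]}"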
00:05:34.150 [2024-07-15 11:16:17.503813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417697 ] 00:05:34.150 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.150 [2024-07-15 11:16:17.573678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.150 [2024-07-15 11:16:17.647637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.718 11:16:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.718 11:16:18 -- common/autotest_common.sh@862 -- # return 0 00:05:34.718 11:16:18 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:34.718 11:16:18 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:34.718 11:16:18 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:38.003 nvme0n1 00:05:38.003 11:16:21 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:38.003 [2024-07-15 11:16:21.444400] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:38.003 request: 00:05:38.003 { 00:05:38.003 "nvme_ctrlr_name": "nvme0", 00:05:38.003 "password": "test", 00:05:38.003 "method": "bdev_nvme_opal_revert", 00:05:38.003 "req_id": 1 00:05:38.003 } 00:05:38.003 Got JSON-RPC error response 00:05:38.003 response: 00:05:38.003 { 00:05:38.003 "code": -32602, 00:05:38.003 "message": "Invalid parameters" 00:05:38.003 } 00:05:38.003 11:16:21 -- common/autotest_common.sh@1604 -- # true 00:05:38.003 11:16:21 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:38.003 11:16:21 -- common/autotest_common.sh@1608 -- # killprocess 417697 00:05:38.003 11:16:21 -- common/autotest_common.sh@948 -- # '[' -z 417697 ']' 00:05:38.003 11:16:21 -- common/autotest_common.sh@952 -- # kill -0 417697 00:05:38.003 11:16:21 -- common/autotest_common.sh@953 -- # uname 00:05:38.003 11:16:21 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.003 11:16:21 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 417697 00:05:38.003 11:16:21 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.003 11:16:21 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.003 11:16:21 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 417697' 00:05:38.003 killing process with pid 417697 00:05:38.004 11:16:21 -- common/autotest_common.sh@967 -- # kill 417697 00:05:38.004 11:16:21 -- common/autotest_common.sh@972 -- # wait 417697 00:05:39.907 11:16:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:39.907 11:16:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:39.907 11:16:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:39.907 11:16:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:39.907 11:16:23 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:39.907 11:16:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.907 11:16:23 -- common/autotest_common.sh@10 -- # set +x 00:05:39.907 11:16:23 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:39.907 11:16:23 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:39.907 11:16:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
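The opal_revert_cleanup pass just traced boils down to two rpc.py calls that appear verbatim in the log; the sketch below shows that sequence, with the "|| true" mirroring how the test tolerates the "nvme0 not support opal" JSON-RPC error (-32602) instead of failing the run. Everything outside those two calls is illustrative.

#!/usr/bin/env bash
# Sketch of the opal-revert attempt traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# attach the controller at 0000:5e:00.0 as bdev controller "nvme0"
"$rpc" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
# revert OPAL state; drives without OPAL support return an error, which is ignored
"$rpc" bdev_nvme_opal_revert -b nvme0 -p test || true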
00:05:39.907 11:16:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.907 11:16:23 -- common/autotest_common.sh@10 -- # set +x 00:05:39.907 ************************************ 00:05:39.907 START TEST env 00:05:39.907 ************************************ 00:05:39.907 11:16:23 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:39.907 * Looking for test storage... 00:05:39.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:39.907 11:16:23 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:39.907 11:16:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.907 11:16:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.907 11:16:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.907 ************************************ 00:05:39.907 START TEST env_memory 00:05:39.907 ************************************ 00:05:39.907 11:16:23 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:39.907 00:05:39.907 00:05:39.907 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.907 http://cunit.sourceforge.net/ 00:05:39.907 00:05:39.907 00:05:39.907 Suite: memory 00:05:39.907 Test: alloc and free memory map ...[2024-07-15 11:16:23.304120] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:39.907 passed 00:05:39.907 Test: mem map translation ...[2024-07-15 11:16:23.323184] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:39.907 [2024-07-15 11:16:23.323198] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:39.907 [2024-07-15 11:16:23.323238] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:39.907 [2024-07-15 11:16:23.323245] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:39.907 passed 00:05:39.907 Test: mem map registration ...[2024-07-15 11:16:23.361919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:39.907 [2024-07-15 11:16:23.361936] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:39.907 passed 00:05:39.907 Test: mem map adjacent registrations ...passed 00:05:39.907 00:05:39.907 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.907 suites 1 1 n/a 0 0 00:05:39.907 tests 4 4 4 0 0 00:05:39.907 asserts 152 152 152 0 n/a 00:05:39.907 00:05:39.907 Elapsed time = 0.140 seconds 00:05:39.907 00:05:39.907 real 0m0.152s 00:05:39.907 user 0m0.145s 00:05:39.907 sys 0m0.007s 00:05:39.907 11:16:23 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.907 11:16:23 env.env_memory -- common/autotest_common.sh@10 -- # set 
+x 00:05:39.907 ************************************ 00:05:39.907 END TEST env_memory 00:05:39.907 ************************************ 00:05:39.907 11:16:23 env -- common/autotest_common.sh@1142 -- # return 0 00:05:39.907 11:16:23 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:39.907 11:16:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.907 11:16:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.907 11:16:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.907 ************************************ 00:05:39.907 START TEST env_vtophys 00:05:39.907 ************************************ 00:05:39.907 11:16:23 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:40.166 EAL: lib.eal log level changed from notice to debug 00:05:40.166 EAL: Detected lcore 0 as core 0 on socket 0 00:05:40.166 EAL: Detected lcore 1 as core 1 on socket 0 00:05:40.166 EAL: Detected lcore 2 as core 2 on socket 0 00:05:40.166 EAL: Detected lcore 3 as core 3 on socket 0 00:05:40.166 EAL: Detected lcore 4 as core 4 on socket 0 00:05:40.166 EAL: Detected lcore 5 as core 5 on socket 0 00:05:40.166 EAL: Detected lcore 6 as core 6 on socket 0 00:05:40.166 EAL: Detected lcore 7 as core 8 on socket 0 00:05:40.166 EAL: Detected lcore 8 as core 9 on socket 0 00:05:40.166 EAL: Detected lcore 9 as core 10 on socket 0 00:05:40.166 EAL: Detected lcore 10 as core 11 on socket 0 00:05:40.166 EAL: Detected lcore 11 as core 12 on socket 0 00:05:40.166 EAL: Detected lcore 12 as core 13 on socket 0 00:05:40.166 EAL: Detected lcore 13 as core 16 on socket 0 00:05:40.166 EAL: Detected lcore 14 as core 17 on socket 0 00:05:40.166 EAL: Detected lcore 15 as core 18 on socket 0 00:05:40.166 EAL: Detected lcore 16 as core 19 on socket 0 00:05:40.166 EAL: Detected lcore 17 as core 20 on socket 0 00:05:40.166 EAL: Detected lcore 18 as core 21 on socket 0 00:05:40.166 EAL: Detected lcore 19 as core 25 on socket 0 00:05:40.166 EAL: Detected lcore 20 as core 26 on socket 0 00:05:40.166 EAL: Detected lcore 21 as core 27 on socket 0 00:05:40.166 EAL: Detected lcore 22 as core 28 on socket 0 00:05:40.166 EAL: Detected lcore 23 as core 29 on socket 0 00:05:40.166 EAL: Detected lcore 24 as core 0 on socket 1 00:05:40.166 EAL: Detected lcore 25 as core 1 on socket 1 00:05:40.166 EAL: Detected lcore 26 as core 2 on socket 1 00:05:40.166 EAL: Detected lcore 27 as core 3 on socket 1 00:05:40.166 EAL: Detected lcore 28 as core 4 on socket 1 00:05:40.166 EAL: Detected lcore 29 as core 5 on socket 1 00:05:40.166 EAL: Detected lcore 30 as core 6 on socket 1 00:05:40.166 EAL: Detected lcore 31 as core 9 on socket 1 00:05:40.166 EAL: Detected lcore 32 as core 10 on socket 1 00:05:40.166 EAL: Detected lcore 33 as core 11 on socket 1 00:05:40.166 EAL: Detected lcore 34 as core 12 on socket 1 00:05:40.166 EAL: Detected lcore 35 as core 13 on socket 1 00:05:40.166 EAL: Detected lcore 36 as core 16 on socket 1 00:05:40.166 EAL: Detected lcore 37 as core 17 on socket 1 00:05:40.166 EAL: Detected lcore 38 as core 18 on socket 1 00:05:40.166 EAL: Detected lcore 39 as core 19 on socket 1 00:05:40.166 EAL: Detected lcore 40 as core 20 on socket 1 00:05:40.166 EAL: Detected lcore 41 as core 21 on socket 1 00:05:40.166 EAL: Detected lcore 42 as core 24 on socket 1 00:05:40.166 EAL: Detected lcore 43 as core 25 on socket 1 00:05:40.166 EAL: Detected lcore 44 as core 26 
on socket 1 00:05:40.166 EAL: Detected lcore 45 as core 27 on socket 1 00:05:40.166 EAL: Detected lcore 46 as core 28 on socket 1 00:05:40.166 EAL: Detected lcore 47 as core 29 on socket 1 00:05:40.166 EAL: Detected lcore 48 as core 0 on socket 0 00:05:40.166 EAL: Detected lcore 49 as core 1 on socket 0 00:05:40.166 EAL: Detected lcore 50 as core 2 on socket 0 00:05:40.166 EAL: Detected lcore 51 as core 3 on socket 0 00:05:40.166 EAL: Detected lcore 52 as core 4 on socket 0 00:05:40.166 EAL: Detected lcore 53 as core 5 on socket 0 00:05:40.166 EAL: Detected lcore 54 as core 6 on socket 0 00:05:40.166 EAL: Detected lcore 55 as core 8 on socket 0 00:05:40.166 EAL: Detected lcore 56 as core 9 on socket 0 00:05:40.166 EAL: Detected lcore 57 as core 10 on socket 0 00:05:40.166 EAL: Detected lcore 58 as core 11 on socket 0 00:05:40.166 EAL: Detected lcore 59 as core 12 on socket 0 00:05:40.166 EAL: Detected lcore 60 as core 13 on socket 0 00:05:40.166 EAL: Detected lcore 61 as core 16 on socket 0 00:05:40.166 EAL: Detected lcore 62 as core 17 on socket 0 00:05:40.166 EAL: Detected lcore 63 as core 18 on socket 0 00:05:40.166 EAL: Detected lcore 64 as core 19 on socket 0 00:05:40.166 EAL: Detected lcore 65 as core 20 on socket 0 00:05:40.166 EAL: Detected lcore 66 as core 21 on socket 0 00:05:40.166 EAL: Detected lcore 67 as core 25 on socket 0 00:05:40.166 EAL: Detected lcore 68 as core 26 on socket 0 00:05:40.166 EAL: Detected lcore 69 as core 27 on socket 0 00:05:40.166 EAL: Detected lcore 70 as core 28 on socket 0 00:05:40.166 EAL: Detected lcore 71 as core 29 on socket 0 00:05:40.166 EAL: Detected lcore 72 as core 0 on socket 1 00:05:40.166 EAL: Detected lcore 73 as core 1 on socket 1 00:05:40.166 EAL: Detected lcore 74 as core 2 on socket 1 00:05:40.166 EAL: Detected lcore 75 as core 3 on socket 1 00:05:40.166 EAL: Detected lcore 76 as core 4 on socket 1 00:05:40.166 EAL: Detected lcore 77 as core 5 on socket 1 00:05:40.166 EAL: Detected lcore 78 as core 6 on socket 1 00:05:40.166 EAL: Detected lcore 79 as core 9 on socket 1 00:05:40.166 EAL: Detected lcore 80 as core 10 on socket 1 00:05:40.166 EAL: Detected lcore 81 as core 11 on socket 1 00:05:40.166 EAL: Detected lcore 82 as core 12 on socket 1 00:05:40.166 EAL: Detected lcore 83 as core 13 on socket 1 00:05:40.166 EAL: Detected lcore 84 as core 16 on socket 1 00:05:40.166 EAL: Detected lcore 85 as core 17 on socket 1 00:05:40.166 EAL: Detected lcore 86 as core 18 on socket 1 00:05:40.166 EAL: Detected lcore 87 as core 19 on socket 1 00:05:40.166 EAL: Detected lcore 88 as core 20 on socket 1 00:05:40.166 EAL: Detected lcore 89 as core 21 on socket 1 00:05:40.166 EAL: Detected lcore 90 as core 24 on socket 1 00:05:40.166 EAL: Detected lcore 91 as core 25 on socket 1 00:05:40.166 EAL: Detected lcore 92 as core 26 on socket 1 00:05:40.166 EAL: Detected lcore 93 as core 27 on socket 1 00:05:40.166 EAL: Detected lcore 94 as core 28 on socket 1 00:05:40.166 EAL: Detected lcore 95 as core 29 on socket 1 00:05:40.166 EAL: Maximum logical cores by configuration: 128 00:05:40.166 EAL: Detected CPU lcores: 96 00:05:40.166 EAL: Detected NUMA nodes: 2 00:05:40.166 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:40.166 EAL: Detected shared linkage of DPDK 00:05:40.166 EAL: No shared files mode enabled, IPC will be disabled 00:05:40.166 EAL: Bus pci wants IOVA as 'DC' 00:05:40.166 EAL: Buses did not request a specific IOVA mode. 00:05:40.166 EAL: IOMMU is available, selecting IOVA as VA mode. 
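The per-node hugepage counters behind the earlier "node1 2048kB 0 / 0" status line and the "No free 2048 kB hugepages reported on node 1" notices can be read straight from sysfs. The helper below is an illustrative stand-in, not part of the SPDK scripts; the sysfs layout it reads is the standard kernel one.

#!/usr/bin/env bash
# Sketch: print free/total 2048 kB hugepages per NUMA node.
for node in /sys/devices/system/node/node[0-9]*; do
    hp="$node/hugepages/hugepages-2048kB"
    [[ -d $hp ]] || continue
    printf '%s 2048kB %s / %s\n' \
        "$(basename "$node")" "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
done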
00:05:40.166 EAL: Selected IOVA mode 'VA' 00:05:40.166 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.166 EAL: Probing VFIO support... 00:05:40.166 EAL: IOMMU type 1 (Type 1) is supported 00:05:40.166 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:40.166 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:40.166 EAL: VFIO support initialized 00:05:40.166 EAL: Ask a virtual area of 0x2e000 bytes 00:05:40.166 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:40.166 EAL: Setting up physically contiguous memory... 00:05:40.166 EAL: Setting maximum number of open files to 524288 00:05:40.166 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:40.166 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:40.166 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:40.166 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.166 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:40.166 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.166 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.166 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:40.166 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:40.166 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.166 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:40.166 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.166 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.166 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:40.166 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:40.166 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.166 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:40.166 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.166 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.166 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:40.166 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:40.166 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.166 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:40.166 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.166 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.166 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:40.166 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:40.166 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:40.166 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.166 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:40.166 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.166 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.166 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:40.166 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:40.166 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.166 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:40.166 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.166 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.166 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:40.166 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:40.166 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.166 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:40.166 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:05:40.166 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.166 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:40.166 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:40.166 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.166 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:40.166 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.166 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.166 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:40.166 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:40.166 EAL: Hugepages will be freed exactly as allocated. 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: TSC frequency is ~2300000 KHz 00:05:40.166 EAL: Main lcore 0 is ready (tid=7fd921309a00;cpuset=[0]) 00:05:40.166 EAL: Trying to obtain current memory policy. 00:05:40.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.166 EAL: Restoring previous memory policy: 0 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: Heap on socket 0 was expanded by 2MB 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:40.166 EAL: Mem event callback 'spdk:(nil)' registered 00:05:40.166 00:05:40.166 00:05:40.166 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.166 http://cunit.sourceforge.net/ 00:05:40.166 00:05:40.166 00:05:40.166 Suite: components_suite 00:05:40.166 Test: vtophys_malloc_test ...passed 00:05:40.166 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:40.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.166 EAL: Restoring previous memory policy: 4 00:05:40.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: Heap on socket 0 was expanded by 4MB 00:05:40.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: Heap on socket 0 was shrunk by 4MB 00:05:40.166 EAL: Trying to obtain current memory policy. 00:05:40.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.166 EAL: Restoring previous memory policy: 4 00:05:40.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: Heap on socket 0 was expanded by 6MB 00:05:40.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: Heap on socket 0 was shrunk by 6MB 00:05:40.166 EAL: Trying to obtain current memory policy. 
00:05:40.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.166 EAL: Restoring previous memory policy: 4 00:05:40.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: Heap on socket 0 was expanded by 10MB 00:05:40.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: Heap on socket 0 was shrunk by 10MB 00:05:40.166 EAL: Trying to obtain current memory policy. 00:05:40.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.166 EAL: Restoring previous memory policy: 4 00:05:40.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: Heap on socket 0 was expanded by 18MB 00:05:40.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: Heap on socket 0 was shrunk by 18MB 00:05:40.166 EAL: Trying to obtain current memory policy. 00:05:40.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.166 EAL: Restoring previous memory policy: 4 00:05:40.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: Heap on socket 0 was expanded by 34MB 00:05:40.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: Heap on socket 0 was shrunk by 34MB 00:05:40.166 EAL: Trying to obtain current memory policy. 00:05:40.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.166 EAL: Restoring previous memory policy: 4 00:05:40.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.166 EAL: Heap on socket 0 was expanded by 66MB 00:05:40.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.166 EAL: request: mp_malloc_sync 00:05:40.166 EAL: No shared files mode enabled, IPC is disabled 00:05:40.167 EAL: Heap on socket 0 was shrunk by 66MB 00:05:40.167 EAL: Trying to obtain current memory policy. 00:05:40.167 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.167 EAL: Restoring previous memory policy: 4 00:05:40.167 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.167 EAL: request: mp_malloc_sync 00:05:40.167 EAL: No shared files mode enabled, IPC is disabled 00:05:40.167 EAL: Heap on socket 0 was expanded by 130MB 00:05:40.167 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.167 EAL: request: mp_malloc_sync 00:05:40.167 EAL: No shared files mode enabled, IPC is disabled 00:05:40.167 EAL: Heap on socket 0 was shrunk by 130MB 00:05:40.167 EAL: Trying to obtain current memory policy. 
00:05:40.167 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.167 EAL: Restoring previous memory policy: 4 00:05:40.167 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.167 EAL: request: mp_malloc_sync 00:05:40.167 EAL: No shared files mode enabled, IPC is disabled 00:05:40.167 EAL: Heap on socket 0 was expanded by 258MB 00:05:40.425 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.425 EAL: request: mp_malloc_sync 00:05:40.425 EAL: No shared files mode enabled, IPC is disabled 00:05:40.425 EAL: Heap on socket 0 was shrunk by 258MB 00:05:40.425 EAL: Trying to obtain current memory policy. 00:05:40.425 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.425 EAL: Restoring previous memory policy: 4 00:05:40.425 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.425 EAL: request: mp_malloc_sync 00:05:40.425 EAL: No shared files mode enabled, IPC is disabled 00:05:40.425 EAL: Heap on socket 0 was expanded by 514MB 00:05:40.425 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.684 EAL: request: mp_malloc_sync 00:05:40.685 EAL: No shared files mode enabled, IPC is disabled 00:05:40.685 EAL: Heap on socket 0 was shrunk by 514MB 00:05:40.685 EAL: Trying to obtain current memory policy. 00:05:40.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.685 EAL: Restoring previous memory policy: 4 00:05:40.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.685 EAL: request: mp_malloc_sync 00:05:40.685 EAL: No shared files mode enabled, IPC is disabled 00:05:40.685 EAL: Heap on socket 0 was expanded by 1026MB 00:05:40.944 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.203 EAL: request: mp_malloc_sync 00:05:41.203 EAL: No shared files mode enabled, IPC is disabled 00:05:41.203 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:41.203 passed 00:05:41.203 00:05:41.203 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.203 suites 1 1 n/a 0 0 00:05:41.203 tests 2 2 2 0 0 00:05:41.203 asserts 497 497 497 0 n/a 00:05:41.203 00:05:41.203 Elapsed time = 0.968 seconds 00:05:41.203 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.203 EAL: request: mp_malloc_sync 00:05:41.203 EAL: No shared files mode enabled, IPC is disabled 00:05:41.203 EAL: Heap on socket 0 was shrunk by 2MB 00:05:41.203 EAL: No shared files mode enabled, IPC is disabled 00:05:41.203 EAL: No shared files mode enabled, IPC is disabled 00:05:41.203 EAL: No shared files mode enabled, IPC is disabled 00:05:41.203 00:05:41.203 real 0m1.099s 00:05:41.203 user 0m0.636s 00:05:41.203 sys 0m0.429s 00:05:41.203 11:16:24 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.203 11:16:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:41.203 ************************************ 00:05:41.203 END TEST env_vtophys 00:05:41.203 ************************************ 00:05:41.203 11:16:24 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.203 11:16:24 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:41.203 11:16:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.203 11:16:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.203 11:16:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.203 ************************************ 00:05:41.203 START TEST env_pci 00:05:41.203 ************************************ 00:05:41.203 11:16:24 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:41.203 00:05:41.203 00:05:41.203 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.203 http://cunit.sourceforge.net/ 00:05:41.203 00:05:41.203 00:05:41.203 Suite: pci 00:05:41.203 Test: pci_hook ...[2024-07-15 11:16:24.667344] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 419010 has claimed it 00:05:41.203 EAL: Cannot find device (10000:00:01.0) 00:05:41.203 EAL: Failed to attach device on primary process 00:05:41.203 passed 00:05:41.203 00:05:41.203 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.203 suites 1 1 n/a 0 0 00:05:41.203 tests 1 1 1 0 0 00:05:41.203 asserts 25 25 25 0 n/a 00:05:41.203 00:05:41.203 Elapsed time = 0.028 seconds 00:05:41.203 00:05:41.203 real 0m0.047s 00:05:41.203 user 0m0.012s 00:05:41.203 sys 0m0.034s 00:05:41.203 11:16:24 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.203 11:16:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:41.203 ************************************ 00:05:41.203 END TEST env_pci 00:05:41.203 ************************************ 00:05:41.203 11:16:24 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.203 11:16:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:41.203 11:16:24 env -- env/env.sh@15 -- # uname 00:05:41.203 11:16:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:41.203 11:16:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:41.203 11:16:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:41.203 11:16:24 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:41.203 11:16:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.203 11:16:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.203 ************************************ 00:05:41.203 START TEST env_dpdk_post_init 00:05:41.203 ************************************ 00:05:41.203 11:16:24 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:41.463 EAL: Detected CPU lcores: 96 00:05:41.463 EAL: Detected NUMA nodes: 2 00:05:41.463 EAL: Detected shared linkage of DPDK 00:05:41.463 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.463 EAL: Selected IOVA mode 'VA' 00:05:41.463 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.463 EAL: VFIO support initialized 00:05:41.463 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.463 EAL: Using IOMMU type 1 (Type 1) 00:05:41.463 EAL: Ignore mapping IO port bar(1) 00:05:41.463 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:41.463 EAL: Ignore mapping IO port bar(1) 00:05:41.463 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:41.463 EAL: Ignore mapping IO port bar(1) 00:05:41.463 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:41.463 EAL: Ignore mapping IO port bar(1) 00:05:41.463 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:41.463 EAL: Ignore mapping IO port bar(1) 00:05:41.463 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:05:41.463 EAL: Ignore mapping IO port bar(1) 00:05:41.463 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:41.463 EAL: Ignore mapping IO port bar(1) 00:05:41.463 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:41.463 EAL: Ignore mapping IO port bar(1) 00:05:41.463 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:42.400 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:42.400 EAL: Ignore mapping IO port bar(1) 00:05:42.400 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:42.400 EAL: Ignore mapping IO port bar(1) 00:05:42.400 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:42.400 EAL: Ignore mapping IO port bar(1) 00:05:42.400 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:42.400 EAL: Ignore mapping IO port bar(1) 00:05:42.400 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:42.400 EAL: Ignore mapping IO port bar(1) 00:05:42.400 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:42.400 EAL: Ignore mapping IO port bar(1) 00:05:42.400 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:42.400 EAL: Ignore mapping IO port bar(1) 00:05:42.400 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:42.400 EAL: Ignore mapping IO port bar(1) 00:05:42.400 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:45.683 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:45.683 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:45.683 Starting DPDK initialization... 00:05:45.683 Starting SPDK post initialization... 00:05:45.683 SPDK NVMe probe 00:05:45.683 Attaching to 0000:5e:00.0 00:05:45.683 Attached to 0000:5e:00.0 00:05:45.683 Cleaning up... 
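The "-c 0x1 --base-virtaddr=0x200000000000" arguments used for this env_dpdk_post_init run are assembled by env.sh, as the earlier argv trace shows. A minimal sketch of that assembly follows; the paths mirror the log, and the script is an illustration of the flow rather than env.sh itself.

#!/usr/bin/env bash
# Sketch: build the core mask + base-virtaddr argument string and run the test binary.
testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
argv='-c 0x1 '
if [[ $(uname) == Linux ]]; then
    argv+='--base-virtaddr=0x200000000000'
fi
# $argv is intentionally unquoted so the options split into separate words
"$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv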
00:05:45.683 00:05:45.683 real 0m4.354s 00:05:45.683 user 0m3.300s 00:05:45.683 sys 0m0.128s 00:05:45.683 11:16:29 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.683 11:16:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.683 ************************************ 00:05:45.683 END TEST env_dpdk_post_init 00:05:45.683 ************************************ 00:05:45.683 11:16:29 env -- common/autotest_common.sh@1142 -- # return 0 00:05:45.683 11:16:29 env -- env/env.sh@26 -- # uname 00:05:45.683 11:16:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:45.683 11:16:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.683 11:16:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.683 11:16:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.683 11:16:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.683 ************************************ 00:05:45.683 START TEST env_mem_callbacks 00:05:45.683 ************************************ 00:05:45.683 11:16:29 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.683 EAL: Detected CPU lcores: 96 00:05:45.683 EAL: Detected NUMA nodes: 2 00:05:45.683 EAL: Detected shared linkage of DPDK 00:05:45.683 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:45.683 EAL: Selected IOVA mode 'VA' 00:05:45.683 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.683 EAL: VFIO support initialized 00:05:45.683 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:45.683 00:05:45.683 00:05:45.683 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.683 http://cunit.sourceforge.net/ 00:05:45.683 00:05:45.683 00:05:45.683 Suite: memory 00:05:45.683 Test: test ... 
00:05:45.683 register 0x200000200000 2097152 00:05:45.683 malloc 3145728 00:05:45.683 register 0x200000400000 4194304 00:05:45.683 buf 0x200000500000 len 3145728 PASSED 00:05:45.683 malloc 64 00:05:45.683 buf 0x2000004fff40 len 64 PASSED 00:05:45.683 malloc 4194304 00:05:45.683 register 0x200000800000 6291456 00:05:45.683 buf 0x200000a00000 len 4194304 PASSED 00:05:45.683 free 0x200000500000 3145728 00:05:45.683 free 0x2000004fff40 64 00:05:45.683 unregister 0x200000400000 4194304 PASSED 00:05:45.683 free 0x200000a00000 4194304 00:05:45.683 unregister 0x200000800000 6291456 PASSED 00:05:45.683 malloc 8388608 00:05:45.683 register 0x200000400000 10485760 00:05:45.683 buf 0x200000600000 len 8388608 PASSED 00:05:45.683 free 0x200000600000 8388608 00:05:45.683 unregister 0x200000400000 10485760 PASSED 00:05:45.683 passed 00:05:45.683 00:05:45.683 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.683 suites 1 1 n/a 0 0 00:05:45.683 tests 1 1 1 0 0 00:05:45.683 asserts 15 15 15 0 n/a 00:05:45.683 00:05:45.683 Elapsed time = 0.008 seconds 00:05:45.683 00:05:45.683 real 0m0.059s 00:05:45.683 user 0m0.020s 00:05:45.683 sys 0m0.039s 00:05:45.683 11:16:29 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.683 11:16:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:45.683 ************************************ 00:05:45.683 END TEST env_mem_callbacks 00:05:45.683 ************************************ 00:05:45.943 11:16:29 env -- common/autotest_common.sh@1142 -- # return 0 00:05:45.943 00:05:45.943 real 0m6.156s 00:05:45.943 user 0m4.279s 00:05:45.943 sys 0m0.947s 00:05:45.943 11:16:29 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.943 11:16:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.943 ************************************ 00:05:45.943 END TEST env 00:05:45.943 ************************************ 00:05:45.943 11:16:29 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.943 11:16:29 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:45.943 11:16:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.943 11:16:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.943 11:16:29 -- common/autotest_common.sh@10 -- # set +x 00:05:45.943 ************************************ 00:05:45.943 START TEST rpc 00:05:45.943 ************************************ 00:05:45.943 11:16:29 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:45.943 * Looking for test storage... 00:05:45.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.943 11:16:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=419832 00:05:45.943 11:16:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.943 11:16:29 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:45.943 11:16:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 419832 00:05:45.943 11:16:29 rpc -- common/autotest_common.sh@829 -- # '[' -z 419832 ']' 00:05:45.943 11:16:29 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.943 11:16:29 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.943 11:16:29 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
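The rpc test being entered here starts spdk_tgt with the bdev tracepoint group ("-e bdev") and then waits for it to listen on /var/tmp/spdk.sock. The sketch below approximates that start-plus-waitforlisten step with a simple polling loop; the binary path, socket path, and retry count mirror the log, while the loop itself is not a copy of the real helper in autotest_common.sh.

#!/usr/bin/env bash
# Rough sketch: launch spdk_tgt and poll the RPC socket until it answers.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_addr=/var/tmp/spdk.sock
"$SPDK/build/bin/spdk_tgt" -e bdev &
spdk_pid=$!
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods only succeeds once the target is listening on $rpc_addr
    if "$SPDK/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done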
00:05:45.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.943 11:16:29 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.943 11:16:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.943 [2024-07-15 11:16:29.509739] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:45.943 [2024-07-15 11:16:29.509784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419832 ] 00:05:45.943 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.201 [2024-07-15 11:16:29.576127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.201 [2024-07-15 11:16:29.649085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:46.201 [2024-07-15 11:16:29.649127] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 419832' to capture a snapshot of events at runtime. 00:05:46.202 [2024-07-15 11:16:29.649136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.202 [2024-07-15 11:16:29.649142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:46.202 [2024-07-15 11:16:29.649148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid419832 for offline analysis/debug. 00:05:46.202 [2024-07-15 11:16:29.649171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.768 11:16:30 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.768 11:16:30 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:46.768 11:16:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:46.768 11:16:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:46.768 11:16:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:46.768 11:16:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:46.768 11:16:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.768 11:16:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.768 11:16:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.768 ************************************ 00:05:46.768 START TEST rpc_integrity 00:05:46.768 ************************************ 00:05:46.768 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:46.768 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.768 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.768 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.768 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.768 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:46.768 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.027 { 00:05:47.027 "name": "Malloc0", 00:05:47.027 "aliases": [ 00:05:47.027 "909f95cb-5ad1-453e-8c1e-546486ca139f" 00:05:47.027 ], 00:05:47.027 "product_name": "Malloc disk", 00:05:47.027 "block_size": 512, 00:05:47.027 "num_blocks": 16384, 00:05:47.027 "uuid": "909f95cb-5ad1-453e-8c1e-546486ca139f", 00:05:47.027 "assigned_rate_limits": { 00:05:47.027 "rw_ios_per_sec": 0, 00:05:47.027 "rw_mbytes_per_sec": 0, 00:05:47.027 "r_mbytes_per_sec": 0, 00:05:47.027 "w_mbytes_per_sec": 0 00:05:47.027 }, 00:05:47.027 "claimed": false, 00:05:47.027 "zoned": false, 00:05:47.027 "supported_io_types": { 00:05:47.027 "read": true, 00:05:47.027 "write": true, 00:05:47.027 "unmap": true, 00:05:47.027 "flush": true, 00:05:47.027 "reset": true, 00:05:47.027 "nvme_admin": false, 00:05:47.027 "nvme_io": false, 00:05:47.027 "nvme_io_md": false, 00:05:47.027 "write_zeroes": true, 00:05:47.027 "zcopy": true, 00:05:47.027 "get_zone_info": false, 00:05:47.027 "zone_management": false, 00:05:47.027 "zone_append": false, 00:05:47.027 "compare": false, 00:05:47.027 "compare_and_write": false, 00:05:47.027 "abort": true, 00:05:47.027 "seek_hole": false, 00:05:47.027 "seek_data": false, 00:05:47.027 "copy": true, 00:05:47.027 "nvme_iov_md": false 00:05:47.027 }, 00:05:47.027 "memory_domains": [ 00:05:47.027 { 00:05:47.027 "dma_device_id": "system", 00:05:47.027 "dma_device_type": 1 00:05:47.027 }, 00:05:47.027 { 00:05:47.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.027 "dma_device_type": 2 00:05:47.027 } 00:05:47.027 ], 00:05:47.027 "driver_specific": {} 00:05:47.027 } 00:05:47.027 ]' 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.027 [2024-07-15 11:16:30.469028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:47.027 [2024-07-15 11:16:30.469060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.027 [2024-07-15 11:16:30.469077] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x183d2d0 00:05:47.027 [2024-07-15 11:16:30.469085] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.027 
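The Malloc0/Passthru0 registration above is driven by plain JSON-RPC calls; the same sequence can be reproduced by hand against a running spdk_tgt with scripts/rpc.py. A minimal sketch (default RPC socket; sizes match the test, and "Malloc0" stands for whatever name bdev_malloc_create actually returns):

    # create an 8 MB malloc bdev with 512-byte blocks, then layer a passthru vbdev on top of it
    ./scripts/rpc.py bdev_malloc_create 8 512
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    # both bdevs should now appear in the list
    ./scripts/rpc.py bdev_get_bdevs | jq length
    # tear down in reverse order
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0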
[2024-07-15 11:16:30.470389] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.027 [2024-07-15 11:16:30.470412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.027 Passthru0 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.027 { 00:05:47.027 "name": "Malloc0", 00:05:47.027 "aliases": [ 00:05:47.027 "909f95cb-5ad1-453e-8c1e-546486ca139f" 00:05:47.027 ], 00:05:47.027 "product_name": "Malloc disk", 00:05:47.027 "block_size": 512, 00:05:47.027 "num_blocks": 16384, 00:05:47.027 "uuid": "909f95cb-5ad1-453e-8c1e-546486ca139f", 00:05:47.027 "assigned_rate_limits": { 00:05:47.027 "rw_ios_per_sec": 0, 00:05:47.027 "rw_mbytes_per_sec": 0, 00:05:47.027 "r_mbytes_per_sec": 0, 00:05:47.027 "w_mbytes_per_sec": 0 00:05:47.027 }, 00:05:47.027 "claimed": true, 00:05:47.027 "claim_type": "exclusive_write", 00:05:47.027 "zoned": false, 00:05:47.027 "supported_io_types": { 00:05:47.027 "read": true, 00:05:47.027 "write": true, 00:05:47.027 "unmap": true, 00:05:47.027 "flush": true, 00:05:47.027 "reset": true, 00:05:47.027 "nvme_admin": false, 00:05:47.027 "nvme_io": false, 00:05:47.027 "nvme_io_md": false, 00:05:47.027 "write_zeroes": true, 00:05:47.027 "zcopy": true, 00:05:47.027 "get_zone_info": false, 00:05:47.027 "zone_management": false, 00:05:47.027 "zone_append": false, 00:05:47.027 "compare": false, 00:05:47.027 "compare_and_write": false, 00:05:47.027 "abort": true, 00:05:47.027 "seek_hole": false, 00:05:47.027 "seek_data": false, 00:05:47.027 "copy": true, 00:05:47.027 "nvme_iov_md": false 00:05:47.027 }, 00:05:47.027 "memory_domains": [ 00:05:47.027 { 00:05:47.027 "dma_device_id": "system", 00:05:47.027 "dma_device_type": 1 00:05:47.027 }, 00:05:47.027 { 00:05:47.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.027 "dma_device_type": 2 00:05:47.027 } 00:05:47.027 ], 00:05:47.027 "driver_specific": {} 00:05:47.027 }, 00:05:47.027 { 00:05:47.027 "name": "Passthru0", 00:05:47.027 "aliases": [ 00:05:47.027 "429e4afb-bd2c-51f3-9d6b-90c5e5cbc2f1" 00:05:47.027 ], 00:05:47.027 "product_name": "passthru", 00:05:47.027 "block_size": 512, 00:05:47.027 "num_blocks": 16384, 00:05:47.027 "uuid": "429e4afb-bd2c-51f3-9d6b-90c5e5cbc2f1", 00:05:47.027 "assigned_rate_limits": { 00:05:47.027 "rw_ios_per_sec": 0, 00:05:47.027 "rw_mbytes_per_sec": 0, 00:05:47.027 "r_mbytes_per_sec": 0, 00:05:47.027 "w_mbytes_per_sec": 0 00:05:47.027 }, 00:05:47.027 "claimed": false, 00:05:47.027 "zoned": false, 00:05:47.027 "supported_io_types": { 00:05:47.027 "read": true, 00:05:47.027 "write": true, 00:05:47.027 "unmap": true, 00:05:47.027 "flush": true, 00:05:47.027 "reset": true, 00:05:47.027 "nvme_admin": false, 00:05:47.027 "nvme_io": false, 00:05:47.027 "nvme_io_md": false, 00:05:47.027 "write_zeroes": true, 00:05:47.027 "zcopy": true, 00:05:47.027 "get_zone_info": false, 00:05:47.027 "zone_management": false, 00:05:47.027 "zone_append": false, 00:05:47.027 "compare": false, 00:05:47.027 "compare_and_write": false, 00:05:47.027 "abort": true, 00:05:47.027 "seek_hole": false, 
00:05:47.027 "seek_data": false, 00:05:47.027 "copy": true, 00:05:47.027 "nvme_iov_md": false 00:05:47.027 }, 00:05:47.027 "memory_domains": [ 00:05:47.027 { 00:05:47.027 "dma_device_id": "system", 00:05:47.027 "dma_device_type": 1 00:05:47.027 }, 00:05:47.027 { 00:05:47.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.027 "dma_device_type": 2 00:05:47.027 } 00:05:47.027 ], 00:05:47.027 "driver_specific": { 00:05:47.027 "passthru": { 00:05:47.027 "name": "Passthru0", 00:05:47.027 "base_bdev_name": "Malloc0" 00:05:47.027 } 00:05:47.027 } 00:05:47.027 } 00:05:47.027 ]' 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.027 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:47.027 11:16:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.027 00:05:47.027 real 0m0.269s 00:05:47.027 user 0m0.167s 00:05:47.027 sys 0m0.033s 00:05:47.028 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.028 11:16:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.028 ************************************ 00:05:47.028 END TEST rpc_integrity 00:05:47.028 ************************************ 00:05:47.286 11:16:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:47.286 11:16:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:47.286 11:16:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.286 11:16:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.286 11:16:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.286 ************************************ 00:05:47.286 START TEST rpc_plugins 00:05:47.286 ************************************ 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:47.286 11:16:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.286 11:16:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:47.286 11:16:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.286 11:16:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:47.286 { 00:05:47.286 "name": "Malloc1", 00:05:47.286 "aliases": [ 00:05:47.286 "6e21deb2-da6c-4ff1-a382-03882861ce94" 00:05:47.286 ], 00:05:47.286 "product_name": "Malloc disk", 00:05:47.286 "block_size": 4096, 00:05:47.286 "num_blocks": 256, 00:05:47.286 "uuid": "6e21deb2-da6c-4ff1-a382-03882861ce94", 00:05:47.286 "assigned_rate_limits": { 00:05:47.286 "rw_ios_per_sec": 0, 00:05:47.286 "rw_mbytes_per_sec": 0, 00:05:47.286 "r_mbytes_per_sec": 0, 00:05:47.286 "w_mbytes_per_sec": 0 00:05:47.286 }, 00:05:47.286 "claimed": false, 00:05:47.286 "zoned": false, 00:05:47.286 "supported_io_types": { 00:05:47.286 "read": true, 00:05:47.286 "write": true, 00:05:47.286 "unmap": true, 00:05:47.286 "flush": true, 00:05:47.286 "reset": true, 00:05:47.286 "nvme_admin": false, 00:05:47.286 "nvme_io": false, 00:05:47.286 "nvme_io_md": false, 00:05:47.286 "write_zeroes": true, 00:05:47.286 "zcopy": true, 00:05:47.286 "get_zone_info": false, 00:05:47.286 "zone_management": false, 00:05:47.286 "zone_append": false, 00:05:47.286 "compare": false, 00:05:47.286 "compare_and_write": false, 00:05:47.286 "abort": true, 00:05:47.286 "seek_hole": false, 00:05:47.286 "seek_data": false, 00:05:47.286 "copy": true, 00:05:47.286 "nvme_iov_md": false 00:05:47.286 }, 00:05:47.286 "memory_domains": [ 00:05:47.286 { 00:05:47.286 "dma_device_id": "system", 00:05:47.286 "dma_device_type": 1 00:05:47.286 }, 00:05:47.286 { 00:05:47.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.286 "dma_device_type": 2 00:05:47.286 } 00:05:47.286 ], 00:05:47.286 "driver_specific": {} 00:05:47.286 } 00:05:47.286 ]' 00:05:47.286 11:16:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:47.286 11:16:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:47.286 11:16:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.286 11:16:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.286 11:16:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:47.286 11:16:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:47.286 11:16:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:47.286 00:05:47.286 real 0m0.136s 00:05:47.286 user 0m0.086s 00:05:47.286 sys 0m0.019s 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.286 11:16:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.286 ************************************ 00:05:47.286 END TEST rpc_plugins 00:05:47.286 ************************************ 00:05:47.286 11:16:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:47.286 11:16:30 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:47.286 11:16:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.286 11:16:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.286 11:16:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.544 ************************************ 00:05:47.544 START TEST rpc_trace_cmd_test 00:05:47.544 ************************************ 00:05:47.544 11:16:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:47.544 11:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:47.544 11:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:47.544 11:16:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.544 11:16:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:47.544 11:16:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.544 11:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:47.544 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid419832", 00:05:47.544 "tpoint_group_mask": "0x8", 00:05:47.544 "iscsi_conn": { 00:05:47.544 "mask": "0x2", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 }, 00:05:47.544 "scsi": { 00:05:47.544 "mask": "0x4", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 }, 00:05:47.544 "bdev": { 00:05:47.544 "mask": "0x8", 00:05:47.544 "tpoint_mask": "0xffffffffffffffff" 00:05:47.544 }, 00:05:47.544 "nvmf_rdma": { 00:05:47.544 "mask": "0x10", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 }, 00:05:47.544 "nvmf_tcp": { 00:05:47.544 "mask": "0x20", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 }, 00:05:47.544 "ftl": { 00:05:47.544 "mask": "0x40", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 }, 00:05:47.544 "blobfs": { 00:05:47.544 "mask": "0x80", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 }, 00:05:47.544 "dsa": { 00:05:47.544 "mask": "0x200", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 }, 00:05:47.544 "thread": { 00:05:47.544 "mask": "0x400", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 }, 00:05:47.544 "nvme_pcie": { 00:05:47.544 "mask": "0x800", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 }, 00:05:47.544 "iaa": { 00:05:47.544 "mask": "0x1000", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 }, 00:05:47.544 "nvme_tcp": { 00:05:47.544 "mask": "0x2000", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 }, 00:05:47.544 "bdev_nvme": { 00:05:47.544 "mask": "0x4000", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 }, 00:05:47.544 "sock": { 00:05:47.544 "mask": "0x8000", 00:05:47.544 "tpoint_mask": "0x0" 00:05:47.544 } 00:05:47.544 }' 00:05:47.544 11:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:47.544 11:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:47.544 11:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:47.544 11:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:47.544 11:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:47.544 11:16:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:47.544 11:16:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:47.544 11:16:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:47.544 11:16:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:47.544 11:16:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
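The trace state asserted above (bdev tpoint group fully enabled, shm path keyed to the target pid) follows from starting spdk_tgt with '-e bdev'. A rough sketch of inspecting that state outside the test, reusing the pid and spdk_trace invocation quoted in the startup banner earlier in this log:

    # query the enabled tracepoint groups over JSON-RPC
    ./scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'
    # capture a snapshot of events from the shared-memory trace file
    ./build/bin/spdk_trace -s spdk_tgt -p 419832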
00:05:47.544 00:05:47.544 real 0m0.217s 00:05:47.544 user 0m0.184s 00:05:47.544 sys 0m0.023s 00:05:47.544 11:16:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.544 11:16:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:47.544 ************************************ 00:05:47.544 END TEST rpc_trace_cmd_test 00:05:47.544 ************************************ 00:05:47.544 11:16:31 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:47.544 11:16:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:47.544 11:16:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:47.544 11:16:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:47.544 11:16:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.544 11:16:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.544 11:16:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.802 ************************************ 00:05:47.802 START TEST rpc_daemon_integrity 00:05:47.802 ************************************ 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.802 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.802 { 00:05:47.802 "name": "Malloc2", 00:05:47.802 "aliases": [ 00:05:47.802 "0e9a71dc-a366-4f55-a3e2-1d5584267ab5" 00:05:47.802 ], 00:05:47.802 "product_name": "Malloc disk", 00:05:47.802 "block_size": 512, 00:05:47.802 "num_blocks": 16384, 00:05:47.802 "uuid": "0e9a71dc-a366-4f55-a3e2-1d5584267ab5", 00:05:47.802 "assigned_rate_limits": { 00:05:47.802 "rw_ios_per_sec": 0, 00:05:47.802 "rw_mbytes_per_sec": 0, 00:05:47.802 "r_mbytes_per_sec": 0, 00:05:47.802 "w_mbytes_per_sec": 0 00:05:47.802 }, 00:05:47.803 "claimed": false, 00:05:47.803 "zoned": false, 00:05:47.803 "supported_io_types": { 00:05:47.803 "read": true, 00:05:47.803 "write": true, 00:05:47.803 "unmap": true, 00:05:47.803 "flush": true, 00:05:47.803 "reset": true, 00:05:47.803 "nvme_admin": false, 00:05:47.803 "nvme_io": false, 
00:05:47.803 "nvme_io_md": false, 00:05:47.803 "write_zeroes": true, 00:05:47.803 "zcopy": true, 00:05:47.803 "get_zone_info": false, 00:05:47.803 "zone_management": false, 00:05:47.803 "zone_append": false, 00:05:47.803 "compare": false, 00:05:47.803 "compare_and_write": false, 00:05:47.803 "abort": true, 00:05:47.803 "seek_hole": false, 00:05:47.803 "seek_data": false, 00:05:47.803 "copy": true, 00:05:47.803 "nvme_iov_md": false 00:05:47.803 }, 00:05:47.803 "memory_domains": [ 00:05:47.803 { 00:05:47.803 "dma_device_id": "system", 00:05:47.803 "dma_device_type": 1 00:05:47.803 }, 00:05:47.803 { 00:05:47.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.803 "dma_device_type": 2 00:05:47.803 } 00:05:47.803 ], 00:05:47.803 "driver_specific": {} 00:05:47.803 } 00:05:47.803 ]' 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.803 [2024-07-15 11:16:31.295298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:47.803 [2024-07-15 11:16:31.295328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.803 [2024-07-15 11:16:31.295344] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19d4ac0 00:05:47.803 [2024-07-15 11:16:31.295354] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.803 [2024-07-15 11:16:31.296355] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.803 [2024-07-15 11:16:31.296378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.803 Passthru0 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.803 { 00:05:47.803 "name": "Malloc2", 00:05:47.803 "aliases": [ 00:05:47.803 "0e9a71dc-a366-4f55-a3e2-1d5584267ab5" 00:05:47.803 ], 00:05:47.803 "product_name": "Malloc disk", 00:05:47.803 "block_size": 512, 00:05:47.803 "num_blocks": 16384, 00:05:47.803 "uuid": "0e9a71dc-a366-4f55-a3e2-1d5584267ab5", 00:05:47.803 "assigned_rate_limits": { 00:05:47.803 "rw_ios_per_sec": 0, 00:05:47.803 "rw_mbytes_per_sec": 0, 00:05:47.803 "r_mbytes_per_sec": 0, 00:05:47.803 "w_mbytes_per_sec": 0 00:05:47.803 }, 00:05:47.803 "claimed": true, 00:05:47.803 "claim_type": "exclusive_write", 00:05:47.803 "zoned": false, 00:05:47.803 "supported_io_types": { 00:05:47.803 "read": true, 00:05:47.803 "write": true, 00:05:47.803 "unmap": true, 00:05:47.803 "flush": true, 00:05:47.803 "reset": true, 00:05:47.803 "nvme_admin": false, 00:05:47.803 "nvme_io": false, 00:05:47.803 "nvme_io_md": false, 00:05:47.803 "write_zeroes": true, 00:05:47.803 "zcopy": true, 00:05:47.803 "get_zone_info": 
false, 00:05:47.803 "zone_management": false, 00:05:47.803 "zone_append": false, 00:05:47.803 "compare": false, 00:05:47.803 "compare_and_write": false, 00:05:47.803 "abort": true, 00:05:47.803 "seek_hole": false, 00:05:47.803 "seek_data": false, 00:05:47.803 "copy": true, 00:05:47.803 "nvme_iov_md": false 00:05:47.803 }, 00:05:47.803 "memory_domains": [ 00:05:47.803 { 00:05:47.803 "dma_device_id": "system", 00:05:47.803 "dma_device_type": 1 00:05:47.803 }, 00:05:47.803 { 00:05:47.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.803 "dma_device_type": 2 00:05:47.803 } 00:05:47.803 ], 00:05:47.803 "driver_specific": {} 00:05:47.803 }, 00:05:47.803 { 00:05:47.803 "name": "Passthru0", 00:05:47.803 "aliases": [ 00:05:47.803 "cd45ef54-1db3-5a33-b6e0-20b009e2c363" 00:05:47.803 ], 00:05:47.803 "product_name": "passthru", 00:05:47.803 "block_size": 512, 00:05:47.803 "num_blocks": 16384, 00:05:47.803 "uuid": "cd45ef54-1db3-5a33-b6e0-20b009e2c363", 00:05:47.803 "assigned_rate_limits": { 00:05:47.803 "rw_ios_per_sec": 0, 00:05:47.803 "rw_mbytes_per_sec": 0, 00:05:47.803 "r_mbytes_per_sec": 0, 00:05:47.803 "w_mbytes_per_sec": 0 00:05:47.803 }, 00:05:47.803 "claimed": false, 00:05:47.803 "zoned": false, 00:05:47.803 "supported_io_types": { 00:05:47.803 "read": true, 00:05:47.803 "write": true, 00:05:47.803 "unmap": true, 00:05:47.803 "flush": true, 00:05:47.803 "reset": true, 00:05:47.803 "nvme_admin": false, 00:05:47.803 "nvme_io": false, 00:05:47.803 "nvme_io_md": false, 00:05:47.803 "write_zeroes": true, 00:05:47.803 "zcopy": true, 00:05:47.803 "get_zone_info": false, 00:05:47.803 "zone_management": false, 00:05:47.803 "zone_append": false, 00:05:47.803 "compare": false, 00:05:47.803 "compare_and_write": false, 00:05:47.803 "abort": true, 00:05:47.803 "seek_hole": false, 00:05:47.803 "seek_data": false, 00:05:47.803 "copy": true, 00:05:47.803 "nvme_iov_md": false 00:05:47.803 }, 00:05:47.803 "memory_domains": [ 00:05:47.803 { 00:05:47.803 "dma_device_id": "system", 00:05:47.803 "dma_device_type": 1 00:05:47.803 }, 00:05:47.803 { 00:05:47.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.803 "dma_device_type": 2 00:05:47.803 } 00:05:47.803 ], 00:05:47.803 "driver_specific": { 00:05:47.803 "passthru": { 00:05:47.803 "name": "Passthru0", 00:05:47.803 "base_bdev_name": "Malloc2" 00:05:47.803 } 00:05:47.803 } 00:05:47.803 } 00:05:47.803 ]' 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.803 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.803 11:16:31 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.062 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.062 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.062 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.062 11:16:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.062 00:05:48.062 real 0m0.272s 00:05:48.062 user 0m0.175s 00:05:48.062 sys 0m0.037s 00:05:48.062 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.062 11:16:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.062 ************************************ 00:05:48.062 END TEST rpc_daemon_integrity 00:05:48.062 ************************************ 00:05:48.062 11:16:31 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:48.062 11:16:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:48.062 11:16:31 rpc -- rpc/rpc.sh@84 -- # killprocess 419832 00:05:48.062 11:16:31 rpc -- common/autotest_common.sh@948 -- # '[' -z 419832 ']' 00:05:48.062 11:16:31 rpc -- common/autotest_common.sh@952 -- # kill -0 419832 00:05:48.062 11:16:31 rpc -- common/autotest_common.sh@953 -- # uname 00:05:48.062 11:16:31 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.062 11:16:31 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 419832 00:05:48.062 11:16:31 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.062 11:16:31 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.062 11:16:31 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 419832' 00:05:48.062 killing process with pid 419832 00:05:48.062 11:16:31 rpc -- common/autotest_common.sh@967 -- # kill 419832 00:05:48.062 11:16:31 rpc -- common/autotest_common.sh@972 -- # wait 419832 00:05:48.321 00:05:48.321 real 0m2.451s 00:05:48.321 user 0m3.138s 00:05:48.321 sys 0m0.684s 00:05:48.321 11:16:31 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.321 11:16:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.321 ************************************ 00:05:48.321 END TEST rpc 00:05:48.321 ************************************ 00:05:48.321 11:16:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.321 11:16:31 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:48.321 11:16:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.321 11:16:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.321 11:16:31 -- common/autotest_common.sh@10 -- # set +x 00:05:48.321 ************************************ 00:05:48.321 START TEST skip_rpc 00:05:48.321 ************************************ 00:05:48.321 11:16:31 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:48.579 * Looking for test storage... 
00:05:48.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.579 11:16:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:48.579 11:16:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:48.579 11:16:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:48.579 11:16:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.579 11:16:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.579 11:16:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.579 ************************************ 00:05:48.579 START TEST skip_rpc 00:05:48.579 ************************************ 00:05:48.579 11:16:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:48.579 11:16:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=420467 00:05:48.579 11:16:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.579 11:16:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:48.579 11:16:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:48.579 [2024-07-15 11:16:32.052635] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:48.580 [2024-07-15 11:16:32.052679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420467 ] 00:05:48.580 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.580 [2024-07-15 11:16:32.120916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.838 [2024-07-15 11:16:32.197213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.162 11:16:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:54.162 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:54.162 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:54.162 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:54.162 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.162 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:54.162 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.162 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:54.162 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.162 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.162 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:54.162 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 420467 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 420467 ']' 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 420467 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 420467 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 420467' 00:05:54.163 killing process with pid 420467 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 420467 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 420467 00:05:54.163 00:05:54.163 real 0m5.360s 00:05:54.163 user 0m5.118s 00:05:54.163 sys 0m0.270s 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.163 11:16:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.163 ************************************ 00:05:54.163 END TEST skip_rpc 00:05:54.163 ************************************ 00:05:54.163 11:16:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:54.163 11:16:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:54.163 11:16:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.163 11:16:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.163 11:16:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.163 ************************************ 00:05:54.163 START TEST skip_rpc_with_json 00:05:54.163 ************************************ 00:05:54.163 11:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:54.163 11:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:54.163 11:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=421418 00:05:54.163 11:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.163 11:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.163 11:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 421418 00:05:54.163 11:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 421418 ']' 00:05:54.163 11:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.163 11:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.163 11:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:54.163 11:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.163 11:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.163 [2024-07-15 11:16:37.478729] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:54.163 [2024-07-15 11:16:37.478770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid421418 ] 00:05:54.163 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.163 [2024-07-15 11:16:37.544470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.163 [2024-07-15 11:16:37.615806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.730 [2024-07-15 11:16:38.289804] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:54.730 request: 00:05:54.730 { 00:05:54.730 "trtype": "tcp", 00:05:54.730 "method": "nvmf_get_transports", 00:05:54.730 "req_id": 1 00:05:54.730 } 00:05:54.730 Got JSON-RPC error response 00:05:54.730 response: 00:05:54.730 { 00:05:54.730 "code": -19, 00:05:54.730 "message": "No such device" 00:05:54.730 } 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.730 [2024-07-15 11:16:38.301914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.730 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.989 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.989 11:16:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:54.989 { 00:05:54.989 "subsystems": [ 00:05:54.989 { 00:05:54.989 "subsystem": "vfio_user_target", 00:05:54.989 "config": null 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "subsystem": "keyring", 00:05:54.989 "config": [] 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "subsystem": "iobuf", 00:05:54.989 "config": [ 00:05:54.989 { 00:05:54.989 "method": "iobuf_set_options", 00:05:54.989 "params": { 00:05:54.989 "small_pool_count": 8192, 00:05:54.989 "large_pool_count": 1024, 00:05:54.989 "small_bufsize": 8192, 00:05:54.989 "large_bufsize": 
135168 00:05:54.989 } 00:05:54.989 } 00:05:54.989 ] 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "subsystem": "sock", 00:05:54.989 "config": [ 00:05:54.989 { 00:05:54.989 "method": "sock_set_default_impl", 00:05:54.989 "params": { 00:05:54.989 "impl_name": "posix" 00:05:54.989 } 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "method": "sock_impl_set_options", 00:05:54.989 "params": { 00:05:54.989 "impl_name": "ssl", 00:05:54.989 "recv_buf_size": 4096, 00:05:54.989 "send_buf_size": 4096, 00:05:54.989 "enable_recv_pipe": true, 00:05:54.989 "enable_quickack": false, 00:05:54.989 "enable_placement_id": 0, 00:05:54.989 "enable_zerocopy_send_server": true, 00:05:54.989 "enable_zerocopy_send_client": false, 00:05:54.989 "zerocopy_threshold": 0, 00:05:54.989 "tls_version": 0, 00:05:54.989 "enable_ktls": false 00:05:54.989 } 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "method": "sock_impl_set_options", 00:05:54.989 "params": { 00:05:54.989 "impl_name": "posix", 00:05:54.989 "recv_buf_size": 2097152, 00:05:54.989 "send_buf_size": 2097152, 00:05:54.989 "enable_recv_pipe": true, 00:05:54.989 "enable_quickack": false, 00:05:54.989 "enable_placement_id": 0, 00:05:54.989 "enable_zerocopy_send_server": true, 00:05:54.989 "enable_zerocopy_send_client": false, 00:05:54.989 "zerocopy_threshold": 0, 00:05:54.989 "tls_version": 0, 00:05:54.989 "enable_ktls": false 00:05:54.989 } 00:05:54.989 } 00:05:54.989 ] 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "subsystem": "vmd", 00:05:54.989 "config": [] 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "subsystem": "accel", 00:05:54.989 "config": [ 00:05:54.989 { 00:05:54.989 "method": "accel_set_options", 00:05:54.989 "params": { 00:05:54.989 "small_cache_size": 128, 00:05:54.989 "large_cache_size": 16, 00:05:54.989 "task_count": 2048, 00:05:54.989 "sequence_count": 2048, 00:05:54.989 "buf_count": 2048 00:05:54.989 } 00:05:54.989 } 00:05:54.989 ] 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "subsystem": "bdev", 00:05:54.989 "config": [ 00:05:54.989 { 00:05:54.989 "method": "bdev_set_options", 00:05:54.989 "params": { 00:05:54.989 "bdev_io_pool_size": 65535, 00:05:54.989 "bdev_io_cache_size": 256, 00:05:54.989 "bdev_auto_examine": true, 00:05:54.989 "iobuf_small_cache_size": 128, 00:05:54.989 "iobuf_large_cache_size": 16 00:05:54.989 } 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "method": "bdev_raid_set_options", 00:05:54.989 "params": { 00:05:54.989 "process_window_size_kb": 1024 00:05:54.989 } 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "method": "bdev_iscsi_set_options", 00:05:54.989 "params": { 00:05:54.989 "timeout_sec": 30 00:05:54.989 } 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "method": "bdev_nvme_set_options", 00:05:54.989 "params": { 00:05:54.989 "action_on_timeout": "none", 00:05:54.989 "timeout_us": 0, 00:05:54.989 "timeout_admin_us": 0, 00:05:54.989 "keep_alive_timeout_ms": 10000, 00:05:54.989 "arbitration_burst": 0, 00:05:54.989 "low_priority_weight": 0, 00:05:54.989 "medium_priority_weight": 0, 00:05:54.989 "high_priority_weight": 0, 00:05:54.989 "nvme_adminq_poll_period_us": 10000, 00:05:54.989 "nvme_ioq_poll_period_us": 0, 00:05:54.989 "io_queue_requests": 0, 00:05:54.989 "delay_cmd_submit": true, 00:05:54.989 "transport_retry_count": 4, 00:05:54.989 "bdev_retry_count": 3, 00:05:54.989 "transport_ack_timeout": 0, 00:05:54.989 "ctrlr_loss_timeout_sec": 0, 00:05:54.989 "reconnect_delay_sec": 0, 00:05:54.989 "fast_io_fail_timeout_sec": 0, 00:05:54.989 "disable_auto_failback": false, 00:05:54.989 "generate_uuids": false, 00:05:54.989 "transport_tos": 0, 
00:05:54.989 "nvme_error_stat": false, 00:05:54.989 "rdma_srq_size": 0, 00:05:54.989 "io_path_stat": false, 00:05:54.989 "allow_accel_sequence": false, 00:05:54.989 "rdma_max_cq_size": 0, 00:05:54.989 "rdma_cm_event_timeout_ms": 0, 00:05:54.989 "dhchap_digests": [ 00:05:54.989 "sha256", 00:05:54.989 "sha384", 00:05:54.989 "sha512" 00:05:54.989 ], 00:05:54.989 "dhchap_dhgroups": [ 00:05:54.989 "null", 00:05:54.989 "ffdhe2048", 00:05:54.989 "ffdhe3072", 00:05:54.989 "ffdhe4096", 00:05:54.989 "ffdhe6144", 00:05:54.989 "ffdhe8192" 00:05:54.989 ] 00:05:54.989 } 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "method": "bdev_nvme_set_hotplug", 00:05:54.989 "params": { 00:05:54.989 "period_us": 100000, 00:05:54.989 "enable": false 00:05:54.989 } 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "method": "bdev_wait_for_examine" 00:05:54.989 } 00:05:54.989 ] 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "subsystem": "scsi", 00:05:54.989 "config": null 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "subsystem": "scheduler", 00:05:54.989 "config": [ 00:05:54.989 { 00:05:54.989 "method": "framework_set_scheduler", 00:05:54.989 "params": { 00:05:54.989 "name": "static" 00:05:54.989 } 00:05:54.989 } 00:05:54.989 ] 00:05:54.989 }, 00:05:54.989 { 00:05:54.989 "subsystem": "vhost_scsi", 00:05:54.989 "config": [] 00:05:54.990 }, 00:05:54.990 { 00:05:54.990 "subsystem": "vhost_blk", 00:05:54.990 "config": [] 00:05:54.990 }, 00:05:54.990 { 00:05:54.990 "subsystem": "ublk", 00:05:54.990 "config": [] 00:05:54.990 }, 00:05:54.990 { 00:05:54.990 "subsystem": "nbd", 00:05:54.990 "config": [] 00:05:54.990 }, 00:05:54.990 { 00:05:54.990 "subsystem": "nvmf", 00:05:54.990 "config": [ 00:05:54.990 { 00:05:54.990 "method": "nvmf_set_config", 00:05:54.990 "params": { 00:05:54.990 "discovery_filter": "match_any", 00:05:54.990 "admin_cmd_passthru": { 00:05:54.990 "identify_ctrlr": false 00:05:54.990 } 00:05:54.990 } 00:05:54.990 }, 00:05:54.990 { 00:05:54.990 "method": "nvmf_set_max_subsystems", 00:05:54.990 "params": { 00:05:54.990 "max_subsystems": 1024 00:05:54.990 } 00:05:54.990 }, 00:05:54.990 { 00:05:54.990 "method": "nvmf_set_crdt", 00:05:54.990 "params": { 00:05:54.990 "crdt1": 0, 00:05:54.990 "crdt2": 0, 00:05:54.990 "crdt3": 0 00:05:54.990 } 00:05:54.990 }, 00:05:54.990 { 00:05:54.990 "method": "nvmf_create_transport", 00:05:54.990 "params": { 00:05:54.990 "trtype": "TCP", 00:05:54.990 "max_queue_depth": 128, 00:05:54.990 "max_io_qpairs_per_ctrlr": 127, 00:05:54.990 "in_capsule_data_size": 4096, 00:05:54.990 "max_io_size": 131072, 00:05:54.990 "io_unit_size": 131072, 00:05:54.990 "max_aq_depth": 128, 00:05:54.990 "num_shared_buffers": 511, 00:05:54.990 "buf_cache_size": 4294967295, 00:05:54.990 "dif_insert_or_strip": false, 00:05:54.990 "zcopy": false, 00:05:54.990 "c2h_success": true, 00:05:54.990 "sock_priority": 0, 00:05:54.990 "abort_timeout_sec": 1, 00:05:54.990 "ack_timeout": 0, 00:05:54.990 "data_wr_pool_size": 0 00:05:54.990 } 00:05:54.990 } 00:05:54.990 ] 00:05:54.990 }, 00:05:54.990 { 00:05:54.990 "subsystem": "iscsi", 00:05:54.990 "config": [ 00:05:54.990 { 00:05:54.990 "method": "iscsi_set_options", 00:05:54.990 "params": { 00:05:54.990 "node_base": "iqn.2016-06.io.spdk", 00:05:54.990 "max_sessions": 128, 00:05:54.990 "max_connections_per_session": 2, 00:05:54.990 "max_queue_depth": 64, 00:05:54.990 "default_time2wait": 2, 00:05:54.990 "default_time2retain": 20, 00:05:54.990 "first_burst_length": 8192, 00:05:54.990 "immediate_data": true, 00:05:54.990 "allow_duplicated_isid": false, 00:05:54.990 
"error_recovery_level": 0, 00:05:54.990 "nop_timeout": 60, 00:05:54.990 "nop_in_interval": 30, 00:05:54.990 "disable_chap": false, 00:05:54.990 "require_chap": false, 00:05:54.990 "mutual_chap": false, 00:05:54.990 "chap_group": 0, 00:05:54.990 "max_large_datain_per_connection": 64, 00:05:54.990 "max_r2t_per_connection": 4, 00:05:54.990 "pdu_pool_size": 36864, 00:05:54.990 "immediate_data_pool_size": 16384, 00:05:54.990 "data_out_pool_size": 2048 00:05:54.990 } 00:05:54.990 } 00:05:54.990 ] 00:05:54.990 } 00:05:54.990 ] 00:05:54.990 } 00:05:54.990 11:16:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:54.990 11:16:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 421418 00:05:54.990 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 421418 ']' 00:05:54.990 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 421418 00:05:54.990 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:54.990 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.990 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 421418 00:05:54.990 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.990 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.990 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 421418' 00:05:54.990 killing process with pid 421418 00:05:54.990 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 421418 00:05:54.990 11:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 421418 00:05:55.249 11:16:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=421654 00:05:55.249 11:16:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:55.249 11:16:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:00.514 11:16:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 421654 00:06:00.514 11:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 421654 ']' 00:06:00.514 11:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 421654 00:06:00.514 11:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:00.514 11:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.514 11:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 421654 00:06:00.514 11:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.514 11:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.514 11:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 421654' 00:06:00.514 killing process with pid 421654 00:06:00.514 11:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 421654 00:06:00.514 11:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 421654 00:06:00.773 11:16:44 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:00.773 00:06:00.773 real 0m6.749s 00:06:00.773 user 0m6.568s 00:06:00.773 sys 0m0.607s 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.773 ************************************ 00:06:00.773 END TEST skip_rpc_with_json 00:06:00.773 ************************************ 00:06:00.773 11:16:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:00.773 11:16:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:00.773 11:16:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.773 11:16:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.773 11:16:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.773 ************************************ 00:06:00.773 START TEST skip_rpc_with_delay 00:06:00.773 ************************************ 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.773 [2024-07-15 11:16:44.302196] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
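The failure being exercised here is just the flag combination below: with --no-rpc-server there is no way to deliver the framework_start_init RPC that --wait-for-rpc waits for, so spdk_tgt aborts startup. A sketch of the expected behaviour (mirroring the NOT wrapper used by skip_rpc.sh):

    # expected to fail: no RPC server, so nothing can ever release the wait
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo $?    # non-zero, which is what the test asserts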
00:06:00.773 [2024-07-15 11:16:44.302264] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.773 00:06:00.773 real 0m0.069s 00:06:00.773 user 0m0.039s 00:06:00.773 sys 0m0.030s 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.773 11:16:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:00.773 ************************************ 00:06:00.773 END TEST skip_rpc_with_delay 00:06:00.773 ************************************ 00:06:00.773 11:16:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:00.773 11:16:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:00.773 11:16:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:00.773 11:16:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:00.773 11:16:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.773 11:16:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.773 11:16:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.032 ************************************ 00:06:01.032 START TEST exit_on_failed_rpc_init 00:06:01.032 ************************************ 00:06:01.032 11:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:01.032 11:16:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=422625 00:06:01.032 11:16:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 422625 00:06:01.032 11:16:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.032 11:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 422625 ']' 00:06:01.032 11:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.032 11:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.032 11:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.032 11:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.032 11:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.032 [2024-07-15 11:16:44.426759] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
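[Editor's note; not part of the captured log] The exit_on_failed_rpc_init entries that begin here and continue below run two targets back to back: the first spdk_tgt (-m 0x1) claims the default RPC socket /var/tmp/spdk.sock, and a second instance started with -m 0x2 is then expected to fail RPC initialization because that socket is already in use. A hedged sketch of the shape of the test, with SPDK_DIR standing in for the workspace path:
  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &      # first instance listens on /var/tmp/spdk.sock
  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x2        # second instance: expected to fail, socket in use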
00:06:01.032 [2024-07-15 11:16:44.426802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422625 ] 00:06:01.032 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.032 [2024-07-15 11:16:44.495501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.032 [2024-07-15 11:16:44.563342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.966 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.966 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:01.966 11:16:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.966 11:16:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.966 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.967 [2024-07-15 11:16:45.273939] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:01.967 [2024-07-15 11:16:45.273985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422858 ] 00:06:01.967 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.967 [2024-07-15 11:16:45.336749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.967 [2024-07-15 11:16:45.410430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.967 [2024-07-15 11:16:45.410495] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:01.967 [2024-07-15 11:16:45.410505] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:01.967 [2024-07-15 11:16:45.410511] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 422625 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 422625 ']' 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 422625 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 422625 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 422625' 00:06:01.967 killing process with pid 422625 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 422625 00:06:01.967 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 422625 00:06:02.533 00:06:02.533 real 0m1.465s 00:06:02.533 user 0m1.676s 00:06:02.533 sys 0m0.417s 00:06:02.533 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.533 11:16:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.533 ************************************ 00:06:02.533 END TEST exit_on_failed_rpc_init 00:06:02.533 ************************************ 00:06:02.533 11:16:45 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:02.533 11:16:45 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:02.533 00:06:02.533 real 0m13.995s 00:06:02.533 user 0m13.539s 00:06:02.533 sys 0m1.564s 00:06:02.533 11:16:45 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.533 11:16:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.533 ************************************ 00:06:02.533 END TEST skip_rpc 00:06:02.533 ************************************ 00:06:02.533 11:16:45 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.533 11:16:45 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:02.533 11:16:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.533 11:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.533 11:16:45 -- common/autotest_common.sh@10 -- # set +x 00:06:02.533 ************************************ 00:06:02.533 START TEST rpc_client 00:06:02.533 ************************************ 00:06:02.533 11:16:45 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:02.533 * Looking for test storage... 00:06:02.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:02.533 11:16:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:02.533 OK 00:06:02.533 11:16:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:02.533 00:06:02.533 real 0m0.106s 00:06:02.533 user 0m0.052s 00:06:02.533 sys 0m0.061s 00:06:02.533 11:16:46 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.533 11:16:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:02.533 ************************************ 00:06:02.533 END TEST rpc_client 00:06:02.533 ************************************ 00:06:02.533 11:16:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.533 11:16:46 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:02.533 11:16:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.533 11:16:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.533 11:16:46 -- common/autotest_common.sh@10 -- # set +x 00:06:02.533 ************************************ 00:06:02.533 START TEST json_config 00:06:02.533 ************************************ 00:06:02.533 11:16:46 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:02.792 11:16:46 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.792 11:16:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:02.792 11:16:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.792 11:16:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.792 11:16:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.792 11:16:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.792 11:16:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.792 11:16:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.792 11:16:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.792 11:16:46 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.793 11:16:46 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.793 11:16:46 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.793 11:16:46 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.793 11:16:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.793 11:16:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.793 11:16:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.793 11:16:46 json_config -- paths/export.sh@5 -- # export PATH 00:06:02.793 11:16:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@47 -- # : 0 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.793 11:16:46 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.793 11:16:46 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:02.793 INFO: JSON configuration test init 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:02.793 11:16:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.793 11:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:02.793 11:16:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.793 11:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.793 11:16:46 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:02.793 11:16:46 json_config -- json_config/common.sh@9 -- # local app=target 00:06:02.793 11:16:46 json_config -- json_config/common.sh@10 -- # shift 00:06:02.793 11:16:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:02.793 11:16:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:02.793 11:16:46 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:06:02.793 11:16:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.793 11:16:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.793 11:16:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=422993 00:06:02.793 11:16:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:02.793 Waiting for target to run... 00:06:02.793 11:16:46 json_config -- json_config/common.sh@25 -- # waitforlisten 422993 /var/tmp/spdk_tgt.sock 00:06:02.793 11:16:46 json_config -- common/autotest_common.sh@829 -- # '[' -z 422993 ']' 00:06:02.793 11:16:46 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:02.793 11:16:46 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.793 11:16:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:02.793 11:16:46 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:02.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:02.793 11:16:46 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.793 11:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.793 [2024-07-15 11:16:46.269036] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:02.793 [2024-07-15 11:16:46.269082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422993 ] 00:06:02.793 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.360 [2024-07-15 11:16:46.716664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.360 [2024-07-15 11:16:46.810277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.618 11:16:47 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.618 11:16:47 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:03.618 11:16:47 json_config -- json_config/common.sh@26 -- # echo '' 00:06:03.618 00:06:03.618 11:16:47 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:03.618 11:16:47 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:03.618 11:16:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:03.618 11:16:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.618 11:16:47 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:03.618 11:16:47 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:03.618 11:16:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.618 11:16:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.618 11:16:47 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:03.618 11:16:47 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:03.618 11:16:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:06.902 11:16:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.902 11:16:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:06.902 11:16:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:06.902 11:16:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.902 11:16:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:06.902 11:16:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.902 11:16:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:06.902 11:16:50 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:06.902 11:16:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:07.160 MallocForNvmf0 00:06:07.160 11:16:50 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:07.160 11:16:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:07.160 MallocForNvmf1 
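[Editor's note; not part of the captured log] The json_config test builds its NVMe-oF/TCP target configuration live over the RPC socket: the bdev_malloc_create calls above and the nvmf_* calls recorded immediately below add two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with both namespaces, and a listener on 127.0.0.1:4420. Collected into one sketch, with the rpc.py invocations copied from the surrounding entries and SPDK_DIR as shorthand for the workspace checkout:
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420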
00:06:07.418 11:16:50 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:07.418 11:16:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:07.418 [2024-07-15 11:16:50.909132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.418 11:16:50 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:07.418 11:16:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:07.676 11:16:51 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:07.676 11:16:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:07.934 11:16:51 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:07.934 11:16:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:07.934 11:16:51 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:07.934 11:16:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:08.193 [2024-07-15 11:16:51.579215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:08.193 11:16:51 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:08.193 11:16:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.193 11:16:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.193 11:16:51 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:08.193 11:16:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.193 11:16:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.193 11:16:51 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:08.193 11:16:51 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:08.193 11:16:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:08.452 MallocBdevForConfigChangeCheck 00:06:08.452 11:16:51 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:08.452 11:16:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.452 11:16:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.452 11:16:51 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:08.452 11:16:51 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:08.709 11:16:52 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:08.709 INFO: shutting down applications... 00:06:08.709 11:16:52 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:08.709 11:16:52 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:08.709 11:16:52 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:08.709 11:16:52 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:10.605 Calling clear_iscsi_subsystem 00:06:10.605 Calling clear_nvmf_subsystem 00:06:10.605 Calling clear_nbd_subsystem 00:06:10.605 Calling clear_ublk_subsystem 00:06:10.605 Calling clear_vhost_blk_subsystem 00:06:10.605 Calling clear_vhost_scsi_subsystem 00:06:10.605 Calling clear_bdev_subsystem 00:06:10.605 11:16:53 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:10.605 11:16:53 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:10.605 11:16:53 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:10.605 11:16:53 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.605 11:16:53 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:10.605 11:16:53 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:10.605 11:16:54 json_config -- json_config/json_config.sh@345 -- # break 00:06:10.605 11:16:54 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:10.605 11:16:54 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:10.605 11:16:54 json_config -- json_config/common.sh@31 -- # local app=target 00:06:10.605 11:16:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:10.605 11:16:54 json_config -- json_config/common.sh@35 -- # [[ -n 422993 ]] 00:06:10.605 11:16:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 422993 00:06:10.605 11:16:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:10.605 11:16:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.605 11:16:54 json_config -- json_config/common.sh@41 -- # kill -0 422993 00:06:10.605 11:16:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.172 11:16:54 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.172 11:16:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.172 11:16:54 json_config -- json_config/common.sh@41 -- # kill -0 422993 00:06:11.172 11:16:54 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.172 11:16:54 json_config -- json_config/common.sh@43 -- # break 00:06:11.172 11:16:54 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.172 11:16:54 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.172 SPDK target shutdown done 00:06:11.172 11:16:54 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:11.172 INFO: relaunching applications... 00:06:11.172 11:16:54 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.172 11:16:54 json_config -- json_config/common.sh@9 -- # local app=target 00:06:11.172 11:16:54 json_config -- json_config/common.sh@10 -- # shift 00:06:11.172 11:16:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.172 11:16:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.172 11:16:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.172 11:16:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.172 11:16:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.172 11:16:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=424551 00:06:11.172 11:16:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.172 Waiting for target to run... 00:06:11.172 11:16:54 json_config -- json_config/common.sh@25 -- # waitforlisten 424551 /var/tmp/spdk_tgt.sock 00:06:11.172 11:16:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.172 11:16:54 json_config -- common/autotest_common.sh@829 -- # '[' -z 424551 ']' 00:06:11.172 11:16:54 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.172 11:16:54 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.172 11:16:54 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.172 11:16:54 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.172 11:16:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.172 [2024-07-15 11:16:54.661501] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
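[Editor's note; not part of the captured log] At this point the first target has been shut down and is being relaunched from the configuration it saved earlier: the save_config output captured over /var/tmp/spdk_tgt.sock is fed back in through --json spdk_tgt_config.json, as the launch command above shows. A hedged two-step sketch (redirecting save_config into spdk_tgt_config.json is an assumption; the log only shows the save_config call and the file being used afterwards):
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config > "$SPDK_DIR/spdk_tgt_config.json"
  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$SPDK_DIR/spdk_tgt_config.json"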
00:06:11.172 [2024-07-15 11:16:54.661556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424551 ] 00:06:11.172 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.740 [2024-07-15 11:16:55.106421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.740 [2024-07-15 11:16:55.198201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.024 [2024-07-15 11:16:58.215241] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.024 [2024-07-15 11:16:58.247563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:15.282 11:16:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.282 11:16:58 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:15.282 11:16:58 json_config -- json_config/common.sh@26 -- # echo '' 00:06:15.282 00:06:15.282 11:16:58 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:15.282 11:16:58 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:15.283 INFO: Checking if target configuration is the same... 00:06:15.283 11:16:58 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.283 11:16:58 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:15.283 11:16:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.283 + '[' 2 -ne 2 ']' 00:06:15.283 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:15.283 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:15.283 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:15.283 +++ basename /dev/fd/62 00:06:15.283 ++ mktemp /tmp/62.XXX 00:06:15.283 + tmp_file_1=/tmp/62.k5g 00:06:15.283 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.283 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:15.283 + tmp_file_2=/tmp/spdk_tgt_config.json.lLZ 00:06:15.283 + ret=0 00:06:15.283 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.849 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.849 + diff -u /tmp/62.k5g /tmp/spdk_tgt_config.json.lLZ 00:06:15.849 + echo 'INFO: JSON config files are the same' 00:06:15.849 INFO: JSON config files are the same 00:06:15.849 + rm /tmp/62.k5g /tmp/spdk_tgt_config.json.lLZ 00:06:15.849 + exit 0 00:06:15.849 11:16:59 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:15.849 11:16:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:15.849 INFO: changing configuration and checking if this can be detected... 
00:06:15.849 11:16:59 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.849 11:16:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.849 11:16:59 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.849 11:16:59 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:15.849 11:16:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.849 + '[' 2 -ne 2 ']' 00:06:15.849 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:15.849 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:15.849 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:15.849 +++ basename /dev/fd/62 00:06:15.849 ++ mktemp /tmp/62.XXX 00:06:15.849 + tmp_file_1=/tmp/62.P8V 00:06:15.849 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.849 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:15.849 + tmp_file_2=/tmp/spdk_tgt_config.json.Ls6 00:06:15.849 + ret=0 00:06:15.849 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:16.107 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:16.376 + diff -u /tmp/62.P8V /tmp/spdk_tgt_config.json.Ls6 00:06:16.376 + ret=1 00:06:16.376 + echo '=== Start of file: /tmp/62.P8V ===' 00:06:16.376 + cat /tmp/62.P8V 00:06:16.376 + echo '=== End of file: /tmp/62.P8V ===' 00:06:16.376 + echo '' 00:06:16.376 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Ls6 ===' 00:06:16.376 + cat /tmp/spdk_tgt_config.json.Ls6 00:06:16.376 + echo '=== End of file: /tmp/spdk_tgt_config.json.Ls6 ===' 00:06:16.376 + echo '' 00:06:16.376 + rm /tmp/62.P8V /tmp/spdk_tgt_config.json.Ls6 00:06:16.376 + exit 1 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:16.376 INFO: configuration change detected. 
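[Editor's note; not part of the captured log] The two json_diff.sh runs above implement the same/changed check: the live configuration is pulled with save_config, both it and spdk_tgt_config.json are normalized with config_filter.py -method sort into temporary files, and diff -u decides the result; exit 0 yields "JSON config files are the same", while after MallocBdevForConfigChangeCheck is deleted over RPC the same diff returns 1 and "configuration change detected" is reported. A hedged sketch of the mechanism, assuming config_filter.py filters stdin to stdout as the trace suggests, and with placeholder temp-file names rather than the mktemp names from the log:
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
    | "$SPDK_DIR/test/json_config/config_filter.py" -method sort > /tmp/live.sorted
  "$SPDK_DIR/test/json_config/config_filter.py" -method sort \
    < "$SPDK_DIR/spdk_tgt_config.json" > /tmp/saved.sorted
  diff -u /tmp/live.sorted /tmp/saved.sorted   # 0 = unchanged; 1 = configuration change detected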
00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@317 -- # [[ -n 424551 ]] 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.376 11:16:59 json_config -- json_config/json_config.sh@323 -- # killprocess 424551 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@948 -- # '[' -z 424551 ']' 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@952 -- # kill -0 424551 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@953 -- # uname 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 424551 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 424551' 00:06:16.376 killing process with pid 424551 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@967 -- # kill 424551 00:06:16.376 11:16:59 json_config -- common/autotest_common.sh@972 -- # wait 424551 00:06:17.779 11:17:01 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.779 11:17:01 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:17.779 11:17:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.779 11:17:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.779 11:17:01 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:17.779 11:17:01 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:17.779 INFO: Success 00:06:17.779 00:06:17.779 real 0m15.246s 00:06:17.779 user 
0m15.957s 00:06:17.779 sys 0m2.039s 00:06:17.779 11:17:01 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.779 11:17:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.779 ************************************ 00:06:17.779 END TEST json_config 00:06:17.779 ************************************ 00:06:18.038 11:17:01 -- common/autotest_common.sh@1142 -- # return 0 00:06:18.038 11:17:01 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:18.038 11:17:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.038 11:17:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.038 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:06:18.038 ************************************ 00:06:18.038 START TEST json_config_extra_key 00:06:18.038 ************************************ 00:06:18.038 11:17:01 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:18.038 11:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.038 11:17:01 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.038 11:17:01 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.038 11:17:01 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.038 11:17:01 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.038 11:17:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.038 11:17:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.038 11:17:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:18.038 11:17:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:18.038 11:17:01 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:18.038 11:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:18.038 11:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:18.038 11:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:18.038 11:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:18.038 11:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:18.038 11:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:18.038 11:17:01 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:18.038 11:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:18.038 11:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:18.039 11:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:18.039 11:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:18.039 INFO: launching applications... 00:06:18.039 11:17:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:18.039 11:17:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:18.039 11:17:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:18.039 11:17:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:18.039 11:17:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:18.039 11:17:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:18.039 11:17:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.039 11:17:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.039 11:17:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=425834 00:06:18.039 11:17:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:18.039 Waiting for target to run... 00:06:18.039 11:17:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 425834 /var/tmp/spdk_tgt.sock 00:06:18.039 11:17:01 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 425834 ']' 00:06:18.039 11:17:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:18.039 11:17:01 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.039 11:17:01 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.039 11:17:01 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:18.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:18.039 11:17:01 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.039 11:17:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:18.039 [2024-07-15 11:17:01.583017] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:18.039 [2024-07-15 11:17:01.583074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425834 ] 00:06:18.039 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.298 [2024-07-15 11:17:01.879659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.557 [2024-07-15 11:17:01.952268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.816 11:17:02 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.816 11:17:02 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:18.816 11:17:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:18.816 00:06:18.816 11:17:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:18.816 INFO: shutting down applications... 00:06:18.816 11:17:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:18.816 11:17:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:18.816 11:17:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:18.816 11:17:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 425834 ]] 00:06:18.816 11:17:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 425834 00:06:18.816 11:17:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:18.816 11:17:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.816 11:17:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 425834 00:06:18.816 11:17:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:19.384 11:17:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:19.384 11:17:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.384 11:17:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 425834 00:06:19.384 11:17:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:19.384 11:17:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:19.384 11:17:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:19.384 11:17:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:19.384 SPDK target shutdown done 00:06:19.384 11:17:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:19.384 Success 00:06:19.384 00:06:19.384 real 0m1.463s 00:06:19.384 user 0m1.232s 00:06:19.384 sys 0m0.384s 00:06:19.384 11:17:02 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.384 11:17:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:19.384 ************************************ 00:06:19.384 END TEST json_config_extra_key 00:06:19.384 ************************************ 00:06:19.384 11:17:02 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.384 11:17:02 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.384 11:17:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.384 11:17:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.384 11:17:02 -- 
common/autotest_common.sh@10 -- # set +x 00:06:19.384 ************************************ 00:06:19.384 START TEST alias_rpc 00:06:19.384 ************************************ 00:06:19.384 11:17:02 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.642 * Looking for test storage... 00:06:19.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:19.642 11:17:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:19.642 11:17:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=426192 00:06:19.642 11:17:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 426192 00:06:19.642 11:17:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.642 11:17:03 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 426192 ']' 00:06:19.642 11:17:03 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.642 11:17:03 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.642 11:17:03 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.642 11:17:03 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.642 11:17:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.642 [2024-07-15 11:17:03.101788] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:19.642 [2024-07-15 11:17:03.101834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426192 ] 00:06:19.642 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.642 [2024-07-15 11:17:03.153265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.642 [2024-07-15 11:17:03.225566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.900 11:17:03 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.900 11:17:03 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:19.900 11:17:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:20.158 11:17:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 426192 00:06:20.158 11:17:03 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 426192 ']' 00:06:20.158 11:17:03 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 426192 00:06:20.158 11:17:03 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:20.158 11:17:03 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.158 11:17:03 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 426192 00:06:20.158 11:17:03 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.158 11:17:03 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.158 11:17:03 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 426192' 00:06:20.158 killing process with pid 426192 00:06:20.158 11:17:03 alias_rpc -- common/autotest_common.sh@967 
-- # kill 426192 00:06:20.158 11:17:03 alias_rpc -- common/autotest_common.sh@972 -- # wait 426192 00:06:20.416 00:06:20.416 real 0m1.032s 00:06:20.416 user 0m1.060s 00:06:20.416 sys 0m0.370s 00:06:20.416 11:17:03 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.416 11:17:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.416 ************************************ 00:06:20.416 END TEST alias_rpc 00:06:20.416 ************************************ 00:06:20.675 11:17:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.675 11:17:04 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:20.675 11:17:04 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:20.675 11:17:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.675 11:17:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.675 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:06:20.675 ************************************ 00:06:20.675 START TEST spdkcli_tcp 00:06:20.675 ************************************ 00:06:20.675 11:17:04 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:20.675 * Looking for test storage... 00:06:20.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:20.675 11:17:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:20.675 11:17:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:20.675 11:17:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:20.675 11:17:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:20.675 11:17:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:20.675 11:17:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:20.675 11:17:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:20.675 11:17:04 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.675 11:17:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.675 11:17:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=426321 00:06:20.675 11:17:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 426321 00:06:20.675 11:17:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:20.675 11:17:04 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 426321 ']' 00:06:20.675 11:17:04 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.675 11:17:04 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.675 11:17:04 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.675 11:17:04 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.675 11:17:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.675 [2024-07-15 11:17:04.208814] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:20.675 [2024-07-15 11:17:04.208862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426321 ] 00:06:20.675 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.933 [2024-07-15 11:17:04.276457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.933 [2024-07-15 11:17:04.354344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.933 [2024-07-15 11:17:04.354345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.500 11:17:05 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.500 11:17:05 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:21.500 11:17:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=426549 00:06:21.500 11:17:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:21.500 11:17:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:21.760 [ 00:06:21.760 "bdev_malloc_delete", 00:06:21.760 "bdev_malloc_create", 00:06:21.760 "bdev_null_resize", 00:06:21.760 "bdev_null_delete", 00:06:21.760 "bdev_null_create", 00:06:21.760 "bdev_nvme_cuse_unregister", 00:06:21.760 "bdev_nvme_cuse_register", 00:06:21.760 "bdev_opal_new_user", 00:06:21.760 "bdev_opal_set_lock_state", 00:06:21.760 "bdev_opal_delete", 00:06:21.760 "bdev_opal_get_info", 00:06:21.760 "bdev_opal_create", 00:06:21.760 "bdev_nvme_opal_revert", 00:06:21.760 "bdev_nvme_opal_init", 00:06:21.760 "bdev_nvme_send_cmd", 00:06:21.760 "bdev_nvme_get_path_iostat", 00:06:21.760 "bdev_nvme_get_mdns_discovery_info", 00:06:21.760 "bdev_nvme_stop_mdns_discovery", 00:06:21.760 "bdev_nvme_start_mdns_discovery", 00:06:21.760 "bdev_nvme_set_multipath_policy", 00:06:21.760 "bdev_nvme_set_preferred_path", 00:06:21.760 "bdev_nvme_get_io_paths", 00:06:21.760 "bdev_nvme_remove_error_injection", 00:06:21.760 "bdev_nvme_add_error_injection", 00:06:21.760 "bdev_nvme_get_discovery_info", 00:06:21.760 "bdev_nvme_stop_discovery", 00:06:21.760 "bdev_nvme_start_discovery", 00:06:21.760 "bdev_nvme_get_controller_health_info", 00:06:21.760 "bdev_nvme_disable_controller", 00:06:21.760 "bdev_nvme_enable_controller", 00:06:21.760 "bdev_nvme_reset_controller", 00:06:21.760 "bdev_nvme_get_transport_statistics", 00:06:21.760 "bdev_nvme_apply_firmware", 00:06:21.760 "bdev_nvme_detach_controller", 00:06:21.760 "bdev_nvme_get_controllers", 00:06:21.760 "bdev_nvme_attach_controller", 00:06:21.760 "bdev_nvme_set_hotplug", 00:06:21.760 "bdev_nvme_set_options", 00:06:21.760 "bdev_passthru_delete", 00:06:21.760 "bdev_passthru_create", 00:06:21.760 "bdev_lvol_set_parent_bdev", 00:06:21.760 "bdev_lvol_set_parent", 00:06:21.760 "bdev_lvol_check_shallow_copy", 00:06:21.760 "bdev_lvol_start_shallow_copy", 00:06:21.760 "bdev_lvol_grow_lvstore", 00:06:21.760 "bdev_lvol_get_lvols", 00:06:21.760 "bdev_lvol_get_lvstores", 00:06:21.760 "bdev_lvol_delete", 00:06:21.760 "bdev_lvol_set_read_only", 00:06:21.760 "bdev_lvol_resize", 00:06:21.760 "bdev_lvol_decouple_parent", 00:06:21.760 "bdev_lvol_inflate", 00:06:21.760 "bdev_lvol_rename", 00:06:21.760 "bdev_lvol_clone_bdev", 00:06:21.760 "bdev_lvol_clone", 00:06:21.760 "bdev_lvol_snapshot", 00:06:21.760 "bdev_lvol_create", 00:06:21.760 "bdev_lvol_delete_lvstore", 00:06:21.760 
"bdev_lvol_rename_lvstore", 00:06:21.760 "bdev_lvol_create_lvstore", 00:06:21.760 "bdev_raid_set_options", 00:06:21.760 "bdev_raid_remove_base_bdev", 00:06:21.760 "bdev_raid_add_base_bdev", 00:06:21.760 "bdev_raid_delete", 00:06:21.760 "bdev_raid_create", 00:06:21.760 "bdev_raid_get_bdevs", 00:06:21.760 "bdev_error_inject_error", 00:06:21.760 "bdev_error_delete", 00:06:21.760 "bdev_error_create", 00:06:21.760 "bdev_split_delete", 00:06:21.760 "bdev_split_create", 00:06:21.760 "bdev_delay_delete", 00:06:21.760 "bdev_delay_create", 00:06:21.760 "bdev_delay_update_latency", 00:06:21.760 "bdev_zone_block_delete", 00:06:21.760 "bdev_zone_block_create", 00:06:21.760 "blobfs_create", 00:06:21.760 "blobfs_detect", 00:06:21.760 "blobfs_set_cache_size", 00:06:21.760 "bdev_aio_delete", 00:06:21.760 "bdev_aio_rescan", 00:06:21.760 "bdev_aio_create", 00:06:21.760 "bdev_ftl_set_property", 00:06:21.760 "bdev_ftl_get_properties", 00:06:21.760 "bdev_ftl_get_stats", 00:06:21.760 "bdev_ftl_unmap", 00:06:21.760 "bdev_ftl_unload", 00:06:21.760 "bdev_ftl_delete", 00:06:21.760 "bdev_ftl_load", 00:06:21.760 "bdev_ftl_create", 00:06:21.760 "bdev_virtio_attach_controller", 00:06:21.760 "bdev_virtio_scsi_get_devices", 00:06:21.760 "bdev_virtio_detach_controller", 00:06:21.760 "bdev_virtio_blk_set_hotplug", 00:06:21.760 "bdev_iscsi_delete", 00:06:21.760 "bdev_iscsi_create", 00:06:21.760 "bdev_iscsi_set_options", 00:06:21.760 "accel_error_inject_error", 00:06:21.760 "ioat_scan_accel_module", 00:06:21.760 "dsa_scan_accel_module", 00:06:21.760 "iaa_scan_accel_module", 00:06:21.760 "vfu_virtio_create_scsi_endpoint", 00:06:21.760 "vfu_virtio_scsi_remove_target", 00:06:21.760 "vfu_virtio_scsi_add_target", 00:06:21.760 "vfu_virtio_create_blk_endpoint", 00:06:21.760 "vfu_virtio_delete_endpoint", 00:06:21.760 "keyring_file_remove_key", 00:06:21.760 "keyring_file_add_key", 00:06:21.760 "keyring_linux_set_options", 00:06:21.760 "iscsi_get_histogram", 00:06:21.760 "iscsi_enable_histogram", 00:06:21.760 "iscsi_set_options", 00:06:21.760 "iscsi_get_auth_groups", 00:06:21.760 "iscsi_auth_group_remove_secret", 00:06:21.760 "iscsi_auth_group_add_secret", 00:06:21.760 "iscsi_delete_auth_group", 00:06:21.760 "iscsi_create_auth_group", 00:06:21.760 "iscsi_set_discovery_auth", 00:06:21.760 "iscsi_get_options", 00:06:21.760 "iscsi_target_node_request_logout", 00:06:21.760 "iscsi_target_node_set_redirect", 00:06:21.760 "iscsi_target_node_set_auth", 00:06:21.760 "iscsi_target_node_add_lun", 00:06:21.760 "iscsi_get_stats", 00:06:21.760 "iscsi_get_connections", 00:06:21.760 "iscsi_portal_group_set_auth", 00:06:21.760 "iscsi_start_portal_group", 00:06:21.760 "iscsi_delete_portal_group", 00:06:21.760 "iscsi_create_portal_group", 00:06:21.760 "iscsi_get_portal_groups", 00:06:21.760 "iscsi_delete_target_node", 00:06:21.760 "iscsi_target_node_remove_pg_ig_maps", 00:06:21.760 "iscsi_target_node_add_pg_ig_maps", 00:06:21.760 "iscsi_create_target_node", 00:06:21.760 "iscsi_get_target_nodes", 00:06:21.760 "iscsi_delete_initiator_group", 00:06:21.760 "iscsi_initiator_group_remove_initiators", 00:06:21.760 "iscsi_initiator_group_add_initiators", 00:06:21.760 "iscsi_create_initiator_group", 00:06:21.760 "iscsi_get_initiator_groups", 00:06:21.760 "nvmf_set_crdt", 00:06:21.760 "nvmf_set_config", 00:06:21.760 "nvmf_set_max_subsystems", 00:06:21.760 "nvmf_stop_mdns_prr", 00:06:21.760 "nvmf_publish_mdns_prr", 00:06:21.760 "nvmf_subsystem_get_listeners", 00:06:21.760 "nvmf_subsystem_get_qpairs", 00:06:21.760 "nvmf_subsystem_get_controllers", 00:06:21.760 
"nvmf_get_stats", 00:06:21.760 "nvmf_get_transports", 00:06:21.760 "nvmf_create_transport", 00:06:21.760 "nvmf_get_targets", 00:06:21.760 "nvmf_delete_target", 00:06:21.760 "nvmf_create_target", 00:06:21.760 "nvmf_subsystem_allow_any_host", 00:06:21.760 "nvmf_subsystem_remove_host", 00:06:21.760 "nvmf_subsystem_add_host", 00:06:21.760 "nvmf_ns_remove_host", 00:06:21.760 "nvmf_ns_add_host", 00:06:21.760 "nvmf_subsystem_remove_ns", 00:06:21.760 "nvmf_subsystem_add_ns", 00:06:21.760 "nvmf_subsystem_listener_set_ana_state", 00:06:21.760 "nvmf_discovery_get_referrals", 00:06:21.760 "nvmf_discovery_remove_referral", 00:06:21.760 "nvmf_discovery_add_referral", 00:06:21.760 "nvmf_subsystem_remove_listener", 00:06:21.760 "nvmf_subsystem_add_listener", 00:06:21.760 "nvmf_delete_subsystem", 00:06:21.760 "nvmf_create_subsystem", 00:06:21.760 "nvmf_get_subsystems", 00:06:21.760 "env_dpdk_get_mem_stats", 00:06:21.760 "nbd_get_disks", 00:06:21.760 "nbd_stop_disk", 00:06:21.760 "nbd_start_disk", 00:06:21.760 "ublk_recover_disk", 00:06:21.760 "ublk_get_disks", 00:06:21.760 "ublk_stop_disk", 00:06:21.760 "ublk_start_disk", 00:06:21.760 "ublk_destroy_target", 00:06:21.760 "ublk_create_target", 00:06:21.760 "virtio_blk_create_transport", 00:06:21.760 "virtio_blk_get_transports", 00:06:21.761 "vhost_controller_set_coalescing", 00:06:21.761 "vhost_get_controllers", 00:06:21.761 "vhost_delete_controller", 00:06:21.761 "vhost_create_blk_controller", 00:06:21.761 "vhost_scsi_controller_remove_target", 00:06:21.761 "vhost_scsi_controller_add_target", 00:06:21.761 "vhost_start_scsi_controller", 00:06:21.761 "vhost_create_scsi_controller", 00:06:21.761 "thread_set_cpumask", 00:06:21.761 "framework_get_governor", 00:06:21.761 "framework_get_scheduler", 00:06:21.761 "framework_set_scheduler", 00:06:21.761 "framework_get_reactors", 00:06:21.761 "thread_get_io_channels", 00:06:21.761 "thread_get_pollers", 00:06:21.761 "thread_get_stats", 00:06:21.761 "framework_monitor_context_switch", 00:06:21.761 "spdk_kill_instance", 00:06:21.761 "log_enable_timestamps", 00:06:21.761 "log_get_flags", 00:06:21.761 "log_clear_flag", 00:06:21.761 "log_set_flag", 00:06:21.761 "log_get_level", 00:06:21.761 "log_set_level", 00:06:21.761 "log_get_print_level", 00:06:21.761 "log_set_print_level", 00:06:21.761 "framework_enable_cpumask_locks", 00:06:21.761 "framework_disable_cpumask_locks", 00:06:21.761 "framework_wait_init", 00:06:21.761 "framework_start_init", 00:06:21.761 "scsi_get_devices", 00:06:21.761 "bdev_get_histogram", 00:06:21.761 "bdev_enable_histogram", 00:06:21.761 "bdev_set_qos_limit", 00:06:21.761 "bdev_set_qd_sampling_period", 00:06:21.761 "bdev_get_bdevs", 00:06:21.761 "bdev_reset_iostat", 00:06:21.761 "bdev_get_iostat", 00:06:21.761 "bdev_examine", 00:06:21.761 "bdev_wait_for_examine", 00:06:21.761 "bdev_set_options", 00:06:21.761 "notify_get_notifications", 00:06:21.761 "notify_get_types", 00:06:21.761 "accel_get_stats", 00:06:21.761 "accel_set_options", 00:06:21.761 "accel_set_driver", 00:06:21.761 "accel_crypto_key_destroy", 00:06:21.761 "accel_crypto_keys_get", 00:06:21.761 "accel_crypto_key_create", 00:06:21.761 "accel_assign_opc", 00:06:21.761 "accel_get_module_info", 00:06:21.761 "accel_get_opc_assignments", 00:06:21.761 "vmd_rescan", 00:06:21.761 "vmd_remove_device", 00:06:21.761 "vmd_enable", 00:06:21.761 "sock_get_default_impl", 00:06:21.761 "sock_set_default_impl", 00:06:21.761 "sock_impl_set_options", 00:06:21.761 "sock_impl_get_options", 00:06:21.761 "iobuf_get_stats", 00:06:21.761 "iobuf_set_options", 
00:06:21.761 "keyring_get_keys", 00:06:21.761 "framework_get_pci_devices", 00:06:21.761 "framework_get_config", 00:06:21.761 "framework_get_subsystems", 00:06:21.761 "vfu_tgt_set_base_path", 00:06:21.761 "trace_get_info", 00:06:21.761 "trace_get_tpoint_group_mask", 00:06:21.761 "trace_disable_tpoint_group", 00:06:21.761 "trace_enable_tpoint_group", 00:06:21.761 "trace_clear_tpoint_mask", 00:06:21.761 "trace_set_tpoint_mask", 00:06:21.761 "spdk_get_version", 00:06:21.761 "rpc_get_methods" 00:06:21.761 ] 00:06:21.761 11:17:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:21.761 11:17:05 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.761 11:17:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.761 11:17:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:21.761 11:17:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 426321 00:06:21.761 11:17:05 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 426321 ']' 00:06:21.761 11:17:05 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 426321 00:06:21.761 11:17:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:21.761 11:17:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.761 11:17:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 426321 00:06:21.761 11:17:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.761 11:17:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.761 11:17:05 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 426321' 00:06:21.761 killing process with pid 426321 00:06:21.761 11:17:05 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 426321 00:06:21.761 11:17:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 426321 00:06:22.019 00:06:22.019 real 0m1.522s 00:06:22.019 user 0m2.799s 00:06:22.019 sys 0m0.468s 00:06:22.019 11:17:05 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.019 11:17:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.019 ************************************ 00:06:22.019 END TEST spdkcli_tcp 00:06:22.019 ************************************ 00:06:22.278 11:17:05 -- common/autotest_common.sh@1142 -- # return 0 00:06:22.278 11:17:05 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:22.278 11:17:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.278 11:17:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.278 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:06:22.278 ************************************ 00:06:22.278 START TEST dpdk_mem_utility 00:06:22.278 ************************************ 00:06:22.278 11:17:05 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:22.278 * Looking for test storage... 
00:06:22.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:22.278 11:17:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:22.278 11:17:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=426735 00:06:22.278 11:17:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 426735 00:06:22.278 11:17:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.278 11:17:05 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 426735 ']' 00:06:22.278 11:17:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.278 11:17:05 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.278 11:17:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.278 11:17:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.278 11:17:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.278 [2024-07-15 11:17:05.796993] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:22.278 [2024-07-15 11:17:05.797042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426735 ] 00:06:22.278 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.278 [2024-07-15 11:17:05.864691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.537 [2024-07-15 11:17:05.944372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.104 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.104 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:23.104 11:17:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:23.104 11:17:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:23.104 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.104 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.104 { 00:06:23.104 "filename": "/tmp/spdk_mem_dump.txt" 00:06:23.104 } 00:06:23.104 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.104 11:17:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:23.104 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:23.104 1 heaps totaling size 814.000000 MiB 00:06:23.104 size: 814.000000 MiB heap id: 0 00:06:23.104 end heaps---------- 00:06:23.104 8 mempools totaling size 598.116089 MiB 00:06:23.104 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:23.104 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:23.104 size: 84.521057 MiB name: bdev_io_426735 00:06:23.104 size: 51.011292 MiB name: evtpool_426735 00:06:23.104 size: 
50.003479 MiB name: msgpool_426735 00:06:23.104 size: 21.763794 MiB name: PDU_Pool 00:06:23.104 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:23.104 size: 0.026123 MiB name: Session_Pool 00:06:23.104 end mempools------- 00:06:23.104 6 memzones totaling size 4.142822 MiB 00:06:23.104 size: 1.000366 MiB name: RG_ring_0_426735 00:06:23.104 size: 1.000366 MiB name: RG_ring_1_426735 00:06:23.104 size: 1.000366 MiB name: RG_ring_4_426735 00:06:23.104 size: 1.000366 MiB name: RG_ring_5_426735 00:06:23.104 size: 0.125366 MiB name: RG_ring_2_426735 00:06:23.104 size: 0.015991 MiB name: RG_ring_3_426735 00:06:23.104 end memzones------- 00:06:23.104 11:17:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:23.363 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:23.363 list of free elements. size: 12.519348 MiB 00:06:23.363 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:23.363 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:23.363 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:23.363 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:23.363 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:23.363 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:23.363 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:23.363 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:23.363 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:23.363 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:23.363 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:23.363 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:23.363 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:23.363 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:23.363 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:23.363 list of standard malloc elements. 
size: 199.218079 MiB 00:06:23.363 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:23.363 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:23.363 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:23.363 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:23.363 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:23.363 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:23.363 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:23.363 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:23.363 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:23.363 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:23.363 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:23.363 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:23.363 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:23.363 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:23.363 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:23.363 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:23.363 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:23.363 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:23.363 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:23.363 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:23.363 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:23.363 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:23.363 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:23.363 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:23.363 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:23.363 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:23.363 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:23.363 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:23.363 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:23.363 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:23.363 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:23.363 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:23.363 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:23.363 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:23.363 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:23.363 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:23.363 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:23.363 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:23.363 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:23.363 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:23.363 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:23.363 list of memzone associated elements. 
size: 602.262573 MiB 00:06:23.363 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:23.363 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:23.363 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:23.363 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:23.363 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:23.363 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_426735_0 00:06:23.363 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:23.363 associated memzone info: size: 48.002930 MiB name: MP_evtpool_426735_0 00:06:23.363 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:23.363 associated memzone info: size: 48.002930 MiB name: MP_msgpool_426735_0 00:06:23.363 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:23.363 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:23.363 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:23.363 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:23.363 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:23.363 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_426735 00:06:23.363 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:23.363 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_426735 00:06:23.363 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:23.363 associated memzone info: size: 1.007996 MiB name: MP_evtpool_426735 00:06:23.363 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:23.363 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:23.363 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:23.363 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:23.363 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:23.363 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:23.363 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:23.363 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:23.363 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:23.363 associated memzone info: size: 1.000366 MiB name: RG_ring_0_426735 00:06:23.363 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:23.363 associated memzone info: size: 1.000366 MiB name: RG_ring_1_426735 00:06:23.363 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:23.363 associated memzone info: size: 1.000366 MiB name: RG_ring_4_426735 00:06:23.363 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:23.363 associated memzone info: size: 1.000366 MiB name: RG_ring_5_426735 00:06:23.363 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:23.363 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_426735 00:06:23.363 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:23.363 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:23.363 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:23.363 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:23.363 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:23.363 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:23.363 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:23.363 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_426735 00:06:23.363 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:23.363 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:23.363 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:23.363 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:23.363 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:23.363 associated memzone info: size: 0.015991 MiB name: RG_ring_3_426735 00:06:23.363 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:23.363 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:23.363 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:23.363 associated memzone info: size: 0.000183 MiB name: MP_msgpool_426735 00:06:23.363 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:23.363 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_426735 00:06:23.363 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:23.363 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:23.363 11:17:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:23.363 11:17:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 426735 00:06:23.363 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 426735 ']' 00:06:23.363 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 426735 00:06:23.363 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:23.363 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.363 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 426735 00:06:23.363 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.363 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.363 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 426735' 00:06:23.363 killing process with pid 426735 00:06:23.363 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 426735 00:06:23.363 11:17:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 426735 00:06:23.623 00:06:23.623 real 0m1.405s 00:06:23.623 user 0m1.480s 00:06:23.623 sys 0m0.405s 00:06:23.623 11:17:07 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.623 11:17:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.623 ************************************ 00:06:23.623 END TEST dpdk_mem_utility 00:06:23.623 ************************************ 00:06:23.623 11:17:07 -- common/autotest_common.sh@1142 -- # return 0 00:06:23.623 11:17:07 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:23.623 11:17:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.623 11:17:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.623 11:17:07 -- common/autotest_common.sh@10 -- # set +x 00:06:23.623 ************************************ 00:06:23.623 START TEST event 00:06:23.623 ************************************ 00:06:23.623 11:17:07 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:23.623 * Looking for test storage... 
00:06:23.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:23.882 11:17:07 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:23.882 11:17:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:23.882 11:17:07 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:23.882 11:17:07 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:23.882 11:17:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.882 11:17:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.882 ************************************ 00:06:23.882 START TEST event_perf 00:06:23.882 ************************************ 00:06:23.882 11:17:07 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:23.882 Running I/O for 1 seconds...[2024-07-15 11:17:07.268830] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:23.882 [2024-07-15 11:17:07.268891] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427103 ] 00:06:23.882 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.882 [2024-07-15 11:17:07.340168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.882 [2024-07-15 11:17:07.416125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.882 [2024-07-15 11:17:07.416249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.882 [2024-07-15 11:17:07.416320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.882 [2024-07-15 11:17:07.416321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.259 Running I/O for 1 seconds... 00:06:25.259 lcore 0: 207618 00:06:25.259 lcore 1: 207617 00:06:25.259 lcore 2: 207618 00:06:25.259 lcore 3: 207619 00:06:25.259 done. 00:06:25.259 00:06:25.259 real 0m1.236s 00:06:25.259 user 0m4.152s 00:06:25.259 sys 0m0.082s 00:06:25.259 11:17:08 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.259 11:17:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.259 ************************************ 00:06:25.259 END TEST event_perf 00:06:25.259 ************************************ 00:06:25.259 11:17:08 event -- common/autotest_common.sh@1142 -- # return 0 00:06:25.259 11:17:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:25.259 11:17:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:25.259 11:17:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.259 11:17:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.259 ************************************ 00:06:25.259 START TEST event_reactor 00:06:25.259 ************************************ 00:06:25.259 11:17:08 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:25.259 [2024-07-15 11:17:08.577530] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:25.259 [2024-07-15 11:17:08.577601] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427310 ] 00:06:25.259 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.259 [2024-07-15 11:17:08.646892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.259 [2024-07-15 11:17:08.722618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.634 test_start 00:06:26.634 oneshot 00:06:26.634 tick 100 00:06:26.634 tick 100 00:06:26.634 tick 250 00:06:26.634 tick 100 00:06:26.634 tick 100 00:06:26.634 tick 100 00:06:26.634 tick 250 00:06:26.634 tick 500 00:06:26.634 tick 100 00:06:26.634 tick 100 00:06:26.634 tick 250 00:06:26.634 tick 100 00:06:26.634 tick 100 00:06:26.634 test_end 00:06:26.634 00:06:26.634 real 0m1.234s 00:06:26.634 user 0m1.145s 00:06:26.634 sys 0m0.086s 00:06:26.634 11:17:09 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.634 11:17:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:26.634 ************************************ 00:06:26.634 END TEST event_reactor 00:06:26.634 ************************************ 00:06:26.634 11:17:09 event -- common/autotest_common.sh@1142 -- # return 0 00:06:26.634 11:17:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:26.634 11:17:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:26.634 11:17:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.634 11:17:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.634 ************************************ 00:06:26.634 START TEST event_reactor_perf 00:06:26.634 ************************************ 00:06:26.634 11:17:09 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:26.634 [2024-07-15 11:17:09.880371] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:26.634 [2024-07-15 11:17:09.880441] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427513 ] 00:06:26.634 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.634 [2024-07-15 11:17:09.950703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.634 [2024-07-15 11:17:10.027286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.607 test_start 00:06:27.607 test_end 00:06:27.607 Performance: 506106 events per second 00:06:27.607 00:06:27.607 real 0m1.238s 00:06:27.607 user 0m1.147s 00:06:27.607 sys 0m0.087s 00:06:27.607 11:17:11 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.607 11:17:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.607 ************************************ 00:06:27.607 END TEST event_reactor_perf 00:06:27.607 ************************************ 00:06:27.607 11:17:11 event -- common/autotest_common.sh@1142 -- # return 0 00:06:27.607 11:17:11 event -- event/event.sh@49 -- # uname -s 00:06:27.607 11:17:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:27.607 11:17:11 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:27.607 11:17:11 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.607 11:17:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.607 11:17:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.607 ************************************ 00:06:27.607 START TEST event_scheduler 00:06:27.607 ************************************ 00:06:27.607 11:17:11 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:27.865 * Looking for test storage... 00:06:27.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:27.865 11:17:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:27.865 11:17:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=427807 00:06:27.865 11:17:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.865 11:17:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:27.865 11:17:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 427807 00:06:27.865 11:17:11 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 427807 ']' 00:06:27.865 11:17:11 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.865 11:17:11 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.865 11:17:11 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.865 11:17:11 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.865 11:17:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.865 [2024-07-15 11:17:11.311191] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:27.865 [2024-07-15 11:17:11.311262] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427807 ] 00:06:27.865 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.865 [2024-07-15 11:17:11.378660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.123 [2024-07-15 11:17:11.461825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.123 [2024-07-15 11:17:11.461853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.123 [2024-07-15 11:17:11.461961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.123 [2024-07-15 11:17:11.461962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.687 11:17:12 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.687 11:17:12 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:28.687 11:17:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:28.687 11:17:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.687 11:17:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.687 [2024-07-15 11:17:12.136419] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:28.687 [2024-07-15 11:17:12.136436] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:28.687 [2024-07-15 11:17:12.136444] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:28.687 [2024-07-15 11:17:12.136450] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:28.687 [2024-07-15 11:17:12.136455] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:28.687 11:17:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.687 11:17:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:28.687 11:17:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.687 11:17:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.687 [2024-07-15 11:17:12.208496] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
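The scheduler test above pins the test application to cores 0-3 with the main lcore on core 1, switches to the dynamic scheduler, and then drives thread creation through an rpc.py plugin. A rough sketch of the same sequence, with commands and arguments taken from the trace and assuming the scheduler_plugin module is reachable on PYTHONPATH as it is for the in-tree test:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    export PYTHONPATH=$SPDK/test/event/scheduler:$PYTHONPATH
    $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    $SPDK/scripts/rpc.py framework_set_scheduler dynamic
    $SPDK/scripts/rpc.py framework_start_init
    # one busy thread and one idle thread, both pinned to core 0
    $SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0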
00:06:28.687 11:17:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.687 11:17:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:28.687 11:17:12 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.687 11:17:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.687 11:17:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.687 ************************************ 00:06:28.687 START TEST scheduler_create_thread 00:06:28.687 ************************************ 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.687 2 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.687 3 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.687 4 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:28.687 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.946 5 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.946 6 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.946 7 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.946 8 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.946 9 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.946 10 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.946 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.512 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.512 11:17:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:29.512 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.512 11:17:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.886 11:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.886 11:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:30.886 11:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:30.886 11:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.886 11:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.821 11:17:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.821 00:06:31.821 real 0m3.102s 00:06:31.821 user 0m0.024s 00:06:31.821 sys 0m0.004s 00:06:31.821 11:17:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.821 11:17:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.821 ************************************ 00:06:31.821 END TEST scheduler_create_thread 00:06:31.821 ************************************ 00:06:31.821 11:17:15 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:31.821 11:17:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:31.822 11:17:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 427807 00:06:31.822 11:17:15 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 427807 ']' 00:06:31.822 11:17:15 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 427807 00:06:31.822 11:17:15 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:31.822 11:17:15 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.822 11:17:15 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 427807 00:06:32.080 11:17:15 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:32.080 11:17:15 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:32.080 11:17:15 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 427807' 00:06:32.080 killing process with pid 427807 00:06:32.080 11:17:15 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 427807 00:06:32.080 11:17:15 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 427807 00:06:32.338 [2024-07-15 11:17:15.723666] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
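Each suite in this trace tears its target down through the same killprocess helper: verify the pid, make sure it is not a sudo wrapper, send the signal, then wait for the pid so the exit status is collected. A hedged reconstruction of that pattern as it appears in the xtrace output (the real helper in test/common/autotest_common.sh handles signal selection and retries more carefully):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1
        # never signal a sudo wrapper by mistake
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }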
00:06:32.597 00:06:32.597 real 0m4.760s 00:06:32.597 user 0m9.241s 00:06:32.597 sys 0m0.399s 00:06:32.597 11:17:15 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.597 11:17:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:32.597 ************************************ 00:06:32.597 END TEST event_scheduler 00:06:32.597 ************************************ 00:06:32.597 11:17:15 event -- common/autotest_common.sh@1142 -- # return 0 00:06:32.597 11:17:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:32.597 11:17:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:32.597 11:17:15 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.597 11:17:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.597 11:17:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.597 ************************************ 00:06:32.597 START TEST app_repeat 00:06:32.597 ************************************ 00:06:32.597 11:17:16 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=428652 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 428652' 00:06:32.597 Process app_repeat pid: 428652 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:32.597 spdk_app_start Round 0 00:06:32.597 11:17:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 428652 /var/tmp/spdk-nbd.sock 00:06:32.597 11:17:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 428652 ']' 00:06:32.597 11:17:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.597 11:17:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.597 11:17:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:32.597 11:17:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.597 11:17:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.597 [2024-07-15 11:17:16.043904] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
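The app_repeat harness that starts here is launched in the background and then polled for its RPC socket; roughly (helper functions such as waitforlisten and killprocess come from autotest_common.sh and are assumed to be sourced):

    app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat
    sock=/var/tmp/spdk-nbd.sock
    $app -r "$sock" -m 0x3 -t 4 &          # mask 0x3 = two cores; -t mirrors repeat_times=4 from the trace
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$repeat_pid" "$sock"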
00:06:32.597 [2024-07-15 11:17:16.043958] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428652 ] 00:06:32.597 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.597 [2024-07-15 11:17:16.112176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.854 [2024-07-15 11:17:16.192877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.854 [2024-07-15 11:17:16.192877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.421 11:17:16 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.421 11:17:16 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:33.421 11:17:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.679 Malloc0 00:06:33.679 11:17:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.679 Malloc1 00:06:33.679 11:17:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.679 11:17:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.937 /dev/nbd0 00:06:33.937 11:17:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.937 11:17:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:33.937 11:17:17 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.937 1+0 records in 00:06:33.937 1+0 records out 00:06:33.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228222 s, 17.9 MB/s 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:33.937 11:17:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:33.937 11:17:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.937 11:17:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.937 11:17:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.212 /dev/nbd1 00:06:34.212 11:17:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.212 11:17:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.212 1+0 records in 00:06:34.212 1+0 records out 00:06:34.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200593 s, 20.4 MB/s 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:34.212 11:17:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:34.212 11:17:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.212 11:17:17 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.212 11:17:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.212 11:17:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.212 11:17:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:34.474 { 00:06:34.474 "nbd_device": "/dev/nbd0", 00:06:34.474 "bdev_name": "Malloc0" 00:06:34.474 }, 00:06:34.474 { 00:06:34.474 "nbd_device": "/dev/nbd1", 00:06:34.474 "bdev_name": "Malloc1" 00:06:34.474 } 00:06:34.474 ]' 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.474 { 00:06:34.474 "nbd_device": "/dev/nbd0", 00:06:34.474 "bdev_name": "Malloc0" 00:06:34.474 }, 00:06:34.474 { 00:06:34.474 "nbd_device": "/dev/nbd1", 00:06:34.474 "bdev_name": "Malloc1" 00:06:34.474 } 00:06:34.474 ]' 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:34.474 /dev/nbd1' 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:34.474 /dev/nbd1' 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:34.474 256+0 records in 00:06:34.474 256+0 records out 00:06:34.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104134 s, 101 MB/s 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:34.474 256+0 records in 00:06:34.474 256+0 records out 00:06:34.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136343 s, 76.9 MB/s 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:34.474 256+0 records in 00:06:34.474 256+0 records out 00:06:34.474 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0148169 s, 70.8 MB/s 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.474 11:17:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.475 11:17:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.475 11:17:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:34.475 11:17:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.475 11:17:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.732 11:17:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.732 11:17:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.732 11:17:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.732 11:17:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.732 11:17:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.732 11:17:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.732 11:17:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.732 11:17:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.732 11:17:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.732 11:17:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.991 11:17:18 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.991 11:17:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.991 11:17:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:35.249 11:17:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.507 [2024-07-15 11:17:18.945436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.507 [2024-07-15 11:17:19.012148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.507 [2024-07-15 11:17:19.012161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.507 [2024-07-15 11:17:19.053113] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.507 [2024-07-15 11:17:19.053151] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.791 11:17:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.791 11:17:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:38.791 spdk_app_start Round 1 00:06:38.791 11:17:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 428652 /var/tmp/spdk-nbd.sock 00:06:38.791 11:17:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 428652 ']' 00:06:38.791 11:17:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.791 11:17:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.791 11:17:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
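Each round performs the same nbd write/verify pass seen above; condensed into standalone commands, the cycle is roughly as follows (the temporary file location is an assumption, the RPCs and the dd/cmp invocations mirror the trace):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096                      # prints the new bdev name, e.g. Malloc0
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0              # verify the data round-tripped
    $rpc nbd_stop_disk /dev/nbd0
    rm /tmp/nbdrandtest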
00:06:38.791 11:17:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.791 11:17:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.791 11:17:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.791 11:17:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:38.791 11:17:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.791 Malloc0 00:06:38.791 11:17:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.791 Malloc1 00:06:38.791 11:17:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.791 11:17:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.049 /dev/nbd0 00:06:39.049 11:17:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.049 11:17:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:39.049 1+0 records in 00:06:39.049 1+0 records out 00:06:39.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019077 s, 21.5 MB/s 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:39.049 11:17:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:39.049 11:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.049 11:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.049 11:17:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.308 /dev/nbd1 00:06:39.308 11:17:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.308 11:17:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.308 1+0 records in 00:06:39.308 1+0 records out 00:06:39.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231109 s, 17.7 MB/s 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:39.308 11:17:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:39.308 11:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.308 11:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.308 11:17:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.308 11:17:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.308 11:17:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.567 11:17:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:39.567 { 00:06:39.567 "nbd_device": "/dev/nbd0", 00:06:39.567 "bdev_name": "Malloc0" 00:06:39.567 }, 00:06:39.567 { 00:06:39.567 "nbd_device": "/dev/nbd1", 00:06:39.567 "bdev_name": "Malloc1" 00:06:39.567 } 00:06:39.567 ]' 00:06:39.567 11:17:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.567 { 00:06:39.567 "nbd_device": "/dev/nbd0", 00:06:39.567 "bdev_name": "Malloc0" 00:06:39.567 }, 00:06:39.567 { 00:06:39.567 "nbd_device": "/dev/nbd1", 00:06:39.567 "bdev_name": "Malloc1" 00:06:39.567 } 00:06:39.567 ]' 00:06:39.567 11:17:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.567 11:17:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.567 /dev/nbd1' 00:06:39.567 11:17:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.567 /dev/nbd1' 00:06:39.567 11:17:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.567 256+0 records in 00:06:39.567 256+0 records out 00:06:39.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103369 s, 101 MB/s 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.567 256+0 records in 00:06:39.567 256+0 records out 00:06:39.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146525 s, 71.6 MB/s 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.567 256+0 records in 00:06:39.567 256+0 records out 00:06:39.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147068 s, 71.3 MB/s 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.567 11:17:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.828 11:17:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.828 11:17:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.828 11:17:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.828 11:17:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.828 11:17:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.828 11:17:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.828 11:17:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.828 11:17:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.828 11:17:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.828 11:17:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.087 11:17:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.345 11:17:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.345 11:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.345 11:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.345 11:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.345 11:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.345 11:17:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.345 11:17:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.345 11:17:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.345 11:17:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.345 11:17:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.345 11:17:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.603 [2024-07-15 11:17:24.077091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.603 [2024-07-15 11:17:24.144089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.603 [2024-07-15 11:17:24.144089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.603 [2024-07-15 11:17:24.185688] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.603 [2024-07-15 11:17:24.185728] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.886 11:17:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.886 11:17:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:43.886 spdk_app_start Round 2 00:06:43.886 11:17:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 428652 /var/tmp/spdk-nbd.sock 00:06:43.886 11:17:26 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 428652 ']' 00:06:43.886 11:17:26 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.886 11:17:26 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.886 11:17:26 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
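Before tearing the app down, the helpers above also confirm that no nbd device is still exported; in isolation that check is roughly:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] && echo 'all nbd devices stopped'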
00:06:43.886 11:17:26 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.886 11:17:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.886 11:17:27 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.886 11:17:27 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:43.886 11:17:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.886 Malloc0 00:06:43.886 11:17:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.886 Malloc1 00:06:43.886 11:17:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.886 11:17:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.144 /dev/nbd0 00:06:44.144 11:17:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.144 11:17:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:44.144 1+0 records in 00:06:44.144 1+0 records out 00:06:44.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221234 s, 18.5 MB/s 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:44.144 11:17:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:44.144 11:17:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.144 11:17:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.144 11:17:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.403 /dev/nbd1 00:06:44.403 11:17:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.403 11:17:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.403 1+0 records in 00:06:44.403 1+0 records out 00:06:44.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198378 s, 20.6 MB/s 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:44.403 11:17:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:44.403 11:17:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.403 11:17:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.403 11:17:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.403 11:17:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.403 11:17:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:44.661 { 00:06:44.661 "nbd_device": "/dev/nbd0", 00:06:44.661 "bdev_name": "Malloc0" 00:06:44.661 }, 00:06:44.661 { 00:06:44.661 "nbd_device": "/dev/nbd1", 00:06:44.661 "bdev_name": "Malloc1" 00:06:44.661 } 00:06:44.661 ]' 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:44.661 { 00:06:44.661 "nbd_device": "/dev/nbd0", 00:06:44.661 "bdev_name": "Malloc0" 00:06:44.661 }, 00:06:44.661 { 00:06:44.661 "nbd_device": "/dev/nbd1", 00:06:44.661 "bdev_name": "Malloc1" 00:06:44.661 } 00:06:44.661 ]' 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:44.661 /dev/nbd1' 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:44.661 /dev/nbd1' 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.661 256+0 records in 00:06:44.661 256+0 records out 00:06:44.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010516 s, 99.7 MB/s 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.661 256+0 records in 00:06:44.661 256+0 records out 00:06:44.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146009 s, 71.8 MB/s 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.661 256+0 records in 00:06:44.661 256+0 records out 00:06:44.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150848 s, 69.5 MB/s 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.661 11:17:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.662 11:17:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.920 11:17:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.920 11:17:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.920 11:17:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.920 11:17:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.920 11:17:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.920 11:17:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.920 11:17:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.920 11:17:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.920 11:17:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.920 11:17:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.179 11:17:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.180 11:17:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.438 11:17:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.438 11:17:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.438 11:17:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.438 11:17:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:45.438 11:17:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.438 11:17:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.438 11:17:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.438 11:17:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.438 11:17:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.438 11:17:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.438 11:17:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:45.697 [2024-07-15 11:17:29.165667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.697 [2024-07-15 11:17:29.232970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.697 [2024-07-15 11:17:29.232971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.697 [2024-07-15 11:17:29.273970] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:45.697 [2024-07-15 11:17:29.274009] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.983 11:17:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 428652 /var/tmp/spdk-nbd.sock 00:06:48.983 11:17:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 428652 ']' 00:06:48.983 11:17:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.983 11:17:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.983 11:17:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:48.983 11:17:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.983 11:17:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:48.983 11:17:32 event.app_repeat -- event/event.sh@39 -- # killprocess 428652 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 428652 ']' 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 428652 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 428652 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 428652' 00:06:48.983 killing process with pid 428652 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@967 -- # kill 428652 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@972 -- # wait 428652 00:06:48.983 spdk_app_start is called in Round 0. 00:06:48.983 Shutdown signal received, stop current app iteration 00:06:48.983 Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 reinitialization... 00:06:48.983 spdk_app_start is called in Round 1. 00:06:48.983 Shutdown signal received, stop current app iteration 00:06:48.983 Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 reinitialization... 00:06:48.983 spdk_app_start is called in Round 2. 00:06:48.983 Shutdown signal received, stop current app iteration 00:06:48.983 Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 reinitialization... 00:06:48.983 spdk_app_start is called in Round 3. 
00:06:48.983 Shutdown signal received, stop current app iteration 00:06:48.983 11:17:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:48.983 11:17:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:48.983 00:06:48.983 real 0m16.378s 00:06:48.983 user 0m35.530s 00:06:48.983 sys 0m2.367s 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.983 11:17:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.983 ************************************ 00:06:48.983 END TEST app_repeat 00:06:48.983 ************************************ 00:06:48.983 11:17:32 event -- common/autotest_common.sh@1142 -- # return 0 00:06:48.983 11:17:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:48.983 11:17:32 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:48.983 11:17:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.983 11:17:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.983 11:17:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.983 ************************************ 00:06:48.983 START TEST cpu_locks 00:06:48.983 ************************************ 00:06:48.983 11:17:32 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:48.983 * Looking for test storage... 00:06:48.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:48.983 11:17:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:48.983 11:17:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:48.983 11:17:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:48.983 11:17:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:48.983 11:17:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.983 11:17:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.983 11:17:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.242 ************************************ 00:06:49.242 START TEST default_locks 00:06:49.242 ************************************ 00:06:49.242 11:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:49.242 11:17:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=431645 00:06:49.242 11:17:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 431645 00:06:49.242 11:17:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.242 11:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 431645 ']' 00:06:49.242 11:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.242 11:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.242 11:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
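To make the default_locks trace that follows easier to read: the test starts a target pinned to core 0, checks that the reactor holds a per-core lock file, kills it, and then checks that neither the process nor the lock survives. Stripped to the traced commands (waitforlisten and killprocess are the test/common/autotest_common.sh helpers seen in the trace; this is a sketch, not the literal test/event/cpu_locks.sh source):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk      # checkout path from the trace
  "$SPDK"/build/bin/spdk_tgt -m 0x1 &                         # core mask 0x1 = core 0 only
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"                               # blocks until /var/tmp/spdk.sock answers
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock          # reactor 0 holds /var/tmp/spdk_cpu_lock_000
  killprocess "$spdk_tgt_pid"
  # afterwards, waitforlisten on the dead pid must fail ("No such process" below) and
  # no /var/tmp/spdk_cpu_lock_* files may remain (the no_locks check in the trace)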
00:06:49.242 11:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.242 11:17:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.242 [2024-07-15 11:17:32.633356] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:49.242 [2024-07-15 11:17:32.633414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431645 ] 00:06:49.242 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.242 [2024-07-15 11:17:32.698876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.242 [2024-07-15 11:17:32.771452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.176 11:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.176 11:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:50.176 11:17:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 431645 00:06:50.176 11:17:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 431645 00:06:50.176 11:17:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.435 lslocks: write error 00:06:50.435 11:17:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 431645 00:06:50.435 11:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 431645 ']' 00:06:50.435 11:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 431645 00:06:50.435 11:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:50.435 11:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.435 11:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 431645 00:06:50.435 11:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.435 11:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.435 11:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 431645' 00:06:50.435 killing process with pid 431645 00:06:50.435 11:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 431645 00:06:50.435 11:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 431645 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 431645 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 431645 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 431645 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 431645 ']' 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (431645) - No such process 00:06:50.695 ERROR: process (pid: 431645) is no longer running 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:50.695 00:06:50.695 real 0m1.579s 00:06:50.695 user 0m1.680s 00:06:50.695 sys 0m0.508s 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.695 11:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.695 ************************************ 00:06:50.695 END TEST default_locks 00:06:50.695 ************************************ 00:06:50.695 11:17:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:50.695 11:17:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:50.695 11:17:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.695 11:17:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.695 11:17:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.695 ************************************ 00:06:50.695 START TEST default_locks_via_rpc 00:06:50.695 ************************************ 00:06:50.695 11:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:50.695 11:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.695 11:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=431908 00:06:50.695 11:17:34 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 431908 00:06:50.695 11:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 431908 ']' 00:06:50.695 11:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.696 11:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.696 11:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.696 11:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.696 11:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.696 [2024-07-15 11:17:34.264654] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:50.696 [2024-07-15 11:17:34.264690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431908 ] 00:06:50.954 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.954 [2024-07-15 11:17:34.334037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.954 [2024-07-15 11:17:34.407108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.521 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.521 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:51.521 11:17:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:51.521 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.521 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.521 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.521 11:17:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:51.521 11:17:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:51.521 11:17:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:51.521 11:17:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:51.521 11:17:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.521 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.522 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.522 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.522 11:17:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 431908 00:06:51.522 11:17:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 431908 00:06:51.522 11:17:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.088 11:17:35 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 431908 00:06:52.088 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 431908 ']' 00:06:52.088 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 431908 00:06:52.088 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:52.088 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.088 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 431908 00:06:52.088 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.088 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.088 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 431908' 00:06:52.088 killing process with pid 431908 00:06:52.088 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 431908 00:06:52.088 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 431908 00:06:52.347 00:06:52.347 real 0m1.583s 00:06:52.347 user 0m1.664s 00:06:52.347 sys 0m0.507s 00:06:52.347 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.347 11:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.347 ************************************ 00:06:52.347 END TEST default_locks_via_rpc 00:06:52.347 ************************************ 00:06:52.347 11:17:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:52.347 11:17:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:52.347 11:17:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.347 11:17:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.347 11:17:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.347 ************************************ 00:06:52.347 START TEST non_locking_app_on_locked_coremask 00:06:52.347 ************************************ 00:06:52.347 11:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:52.347 11:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=432185 00:06:52.347 11:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 432185 /var/tmp/spdk.sock 00:06:52.347 11:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.347 11:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 432185 ']' 00:06:52.347 11:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.347 11:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.347 11:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.347 11:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.347 11:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.347 [2024-07-15 11:17:35.925508] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:52.347 [2024-07-15 11:17:35.925555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432185 ] 00:06:52.606 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.606 [2024-07-15 11:17:35.992484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.606 [2024-07-15 11:17:36.070626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.174 11:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.174 11:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:53.174 11:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:53.174 11:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=432401 00:06:53.174 11:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 432401 /var/tmp/spdk2.sock 00:06:53.174 11:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 432401 ']' 00:06:53.174 11:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.174 11:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.174 11:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.174 11:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.174 11:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.434 [2024-07-15 11:17:36.768477] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:53.434 [2024-07-15 11:17:36.768524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432401 ] 00:06:53.434 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.434 [2024-07-15 11:17:36.842500] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
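The "CPU core locks deactivated" notice just above is the point of non_locking_app_on_locked_coremask: a second target may share core 0 with the lock holder only because it opts out of the locking. As a sketch built from the traced command lines ($SPDK abbreviates the checkout path shown in the trace):

  "$SPDK"/build/bin/spdk_tgt -m 0x1 &                                                  # pid 432185: takes spdk_cpu_lock_000
  "$SPDK"/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 432401
  # the second target prints "CPU core locks deactivated": it neither takes nor checks
  # the core 0 lock, so both instances come up and the test then kills them in turn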
00:06:53.434 [2024-07-15 11:17:36.842531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.434 [2024-07-15 11:17:36.987681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.003 11:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.003 11:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:54.003 11:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 432185 00:06:54.003 11:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.003 11:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 432185 00:06:54.570 lslocks: write error 00:06:54.570 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 432185 00:06:54.570 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 432185 ']' 00:06:54.570 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 432185 00:06:54.570 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:54.829 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.829 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 432185 00:06:54.829 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.829 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.829 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 432185' 00:06:54.829 killing process with pid 432185 00:06:54.829 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 432185 00:06:54.829 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 432185 00:06:55.427 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 432401 00:06:55.427 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 432401 ']' 00:06:55.427 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 432401 00:06:55.427 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:55.427 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.427 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 432401 00:06:55.427 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.427 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.427 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 432401' 00:06:55.427 killing 
process with pid 432401 00:06:55.427 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 432401 00:06:55.427 11:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 432401 00:06:55.687 00:06:55.687 real 0m3.292s 00:06:55.687 user 0m3.495s 00:06:55.687 sys 0m0.968s 00:06:55.687 11:17:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.687 11:17:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.687 ************************************ 00:06:55.687 END TEST non_locking_app_on_locked_coremask 00:06:55.687 ************************************ 00:06:55.687 11:17:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:55.687 11:17:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:55.687 11:17:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.687 11:17:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.687 11:17:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.687 ************************************ 00:06:55.687 START TEST locking_app_on_unlocked_coremask 00:06:55.687 ************************************ 00:06:55.687 11:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:55.687 11:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=432894 00:06:55.687 11:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 432894 /var/tmp/spdk.sock 00:06:55.687 11:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:55.687 11:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 432894 ']' 00:06:55.687 11:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.687 11:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.687 11:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.687 11:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.687 11:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.947 [2024-07-15 11:17:39.286977] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:55.947 [2024-07-15 11:17:39.287019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432894 ] 00:06:55.947 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.947 [2024-07-15 11:17:39.354661] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
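locking_app_on_unlocked_coremask, starting here, is the mirror image: the first target skips the lock at startup, so a second, normally started target can still claim core 0. Roughly, from the traced commands:

  "$SPDK"/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &        # pid 432894: takes no lock
  "$SPDK"/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # pid 432971: claims spdk_cpu_lock_000
  # lslocks -p 432971 | grep -q spdk_cpu_lock succeeds further down, and killing 432894
  # first must not disturb a lock it never held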
00:06:55.947 [2024-07-15 11:17:39.354690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.947 [2024-07-15 11:17:39.424170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.514 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.514 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:56.514 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=432971 00:06:56.514 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 432971 /var/tmp/spdk2.sock 00:06:56.514 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:56.514 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 432971 ']' 00:06:56.514 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.514 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.514 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.514 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.514 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.772 [2024-07-15 11:17:40.138927] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:56.772 [2024-07-15 11:17:40.138979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432971 ] 00:06:56.772 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.772 [2024-07-15 11:17:40.215707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.772 [2024-07-15 11:17:40.361754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.711 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.711 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:57.711 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 432971 00:06:57.711 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 432971 00:06:57.711 11:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.711 lslocks: write error 00:06:57.711 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 432894 00:06:57.711 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 432894 ']' 00:06:57.711 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 432894 00:06:57.711 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:57.711 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.711 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 432894 00:06:57.711 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.711 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.711 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 432894' 00:06:57.711 killing process with pid 432894 00:06:57.711 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 432894 00:06:57.711 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 432894 00:06:58.280 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 432971 00:06:58.280 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 432971 ']' 00:06:58.280 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 432971 00:06:58.280 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:58.280 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.280 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 432971 00:06:58.539 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:58.539 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:58.539 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 432971' 00:06:58.539 killing process with pid 432971 00:06:58.539 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 432971 00:06:58.539 11:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 432971 00:06:58.798 00:06:58.798 real 0m2.981s 00:06:58.798 user 0m3.218s 00:06:58.798 sys 0m0.805s 00:06:58.798 11:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.798 11:17:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.799 ************************************ 00:06:58.799 END TEST locking_app_on_unlocked_coremask 00:06:58.799 ************************************ 00:06:58.799 11:17:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:58.799 11:17:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:58.799 11:17:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.799 11:17:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.799 11:17:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.799 ************************************ 00:06:58.799 START TEST locking_app_on_locked_coremask 00:06:58.799 ************************************ 00:06:58.799 11:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:58.799 11:17:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=433397 00:06:58.799 11:17:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 433397 /var/tmp/spdk.sock 00:06:58.799 11:17:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.799 11:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 433397 ']' 00:06:58.799 11:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.799 11:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.799 11:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.799 11:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.799 11:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.799 [2024-07-15 11:17:42.336062] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:58.799 [2024-07-15 11:17:42.336102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433397 ] 00:06:58.799 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.058 [2024-07-15 11:17:42.400243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.058 [2024-07-15 11:17:42.479860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=433625 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 433625 /var/tmp/spdk2.sock 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 433625 /var/tmp/spdk2.sock 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 433625 /var/tmp/spdk2.sock 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 433625 ']' 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.626 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.626 [2024-07-15 11:17:43.164823] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:59.626 [2024-07-15 11:17:43.164868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433625 ] 00:06:59.626 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.884 [2024-07-15 11:17:43.234157] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 433397 has claimed it. 00:06:59.884 [2024-07-15 11:17:43.234187] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (433625) - No such process 00:07:00.449 ERROR: process (pid: 433625) is no longer running 00:07:00.449 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.449 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:00.449 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:00.449 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.449 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.449 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.449 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 433397 00:07:00.449 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 433397 00:07:00.449 11:17:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.708 lslocks: write error 00:07:00.708 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 433397 00:07:00.708 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 433397 ']' 00:07:00.708 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 433397 00:07:00.708 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:00.708 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.708 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 433397 00:07:00.708 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.708 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.708 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 433397' 00:07:00.708 killing process with pid 433397 00:07:00.708 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 433397 00:07:00.708 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 433397 00:07:00.968 00:07:00.968 real 0m2.251s 00:07:00.968 user 0m2.472s 00:07:00.968 sys 0m0.616s 00:07:00.968 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.968 11:17:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.968 ************************************ 00:07:00.968 END TEST locking_app_on_locked_coremask 00:07:00.968 ************************************ 00:07:01.227 11:17:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:01.227 11:17:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:01.227 11:17:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:01.227 11:17:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.227 11:17:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.227 ************************************ 00:07:01.227 START TEST locking_overlapped_coremask 00:07:01.227 ************************************ 00:07:01.227 11:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:01.227 11:17:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=433893 00:07:01.227 11:17:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 433893 /var/tmp/spdk.sock 00:07:01.227 11:17:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:01.227 11:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 433893 ']' 00:07:01.227 11:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.227 11:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.227 11:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.227 11:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.227 11:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.227 [2024-07-15 11:17:44.658350] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:01.227 [2024-07-15 11:17:44.658394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433893 ] 00:07:01.227 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.227 [2024-07-15 11:17:44.729535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.227 [2024-07-15 11:17:44.812526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.227 [2024-07-15 11:17:44.812544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.227 [2024-07-15 11:17:44.812546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=433902 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 433902 /var/tmp/spdk2.sock 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 433902 /var/tmp/spdk2.sock 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 433902 /var/tmp/spdk2.sock 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 433902 ']' 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.165 11:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.165 [2024-07-15 11:17:45.512566] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:02.165 [2024-07-15 11:17:45.512609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433902 ] 00:07:02.165 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.165 [2024-07-15 11:17:45.588754] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 433893 has claimed it. 00:07:02.165 [2024-07-15 11:17:45.588792] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:02.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (433902) - No such process 00:07:02.733 ERROR: process (pid: 433902) is no longer running 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 433893 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 433893 ']' 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 433893 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 433893 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 433893' 00:07:02.733 killing process with pid 433893 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 433893 00:07:02.733 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 433893 00:07:02.992 00:07:02.992 real 0m1.898s 00:07:02.993 user 0m5.285s 00:07:02.993 sys 0m0.428s 00:07:02.993 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.993 11:17:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.993 ************************************ 00:07:02.993 END TEST locking_overlapped_coremask 00:07:02.993 ************************************ 00:07:02.993 11:17:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:02.993 11:17:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:02.993 11:17:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.993 11:17:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.993 11:17:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.993 ************************************ 00:07:02.993 START TEST locking_overlapped_coremask_via_rpc 00:07:02.993 ************************************ 00:07:02.993 11:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:02.993 11:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=434165 00:07:02.993 11:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 434165 /var/tmp/spdk.sock 00:07:02.993 11:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:02.993 11:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 434165 ']' 00:07:02.993 11:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.993 11:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.993 11:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.993 11:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.993 11:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.252 [2024-07-15 11:17:46.623337] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:03.252 [2024-07-15 11:17:46.623381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434165 ] 00:07:03.252 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.252 [2024-07-15 11:17:46.691023] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
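The overlapping-mask case that just finished is the one where failure is the expected result: the first target claims cores 0-2 (mask 0x7), so a second target asking for cores 2-4 (mask 0x1c) must refuse to start. A sketch of what the trace exercised (NOT and check_remaining_locks are the autotest_common.sh / cpu_locks.sh helpers visible in the trace):

  "$SPDK"/build/bin/spdk_tgt -m 0x7 &                                # pid 433893: spdk_cpu_lock_000..002
  "$SPDK"/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock &        # wants cores 2-4; core 2 overlaps
  NOT waitforlisten $! /var/tmp/spdk2.sock                           # the second target must die first:
  # "Cannot create lock on core 2, probably process 433893 has claimed it"
  # check_remaining_locks then confirms exactly /var/tmp/spdk_cpu_lock_{000..002} are present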
00:07:03.252 [2024-07-15 11:17:46.691050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.252 [2024-07-15 11:17:46.761344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.252 [2024-07-15 11:17:46.761453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.252 [2024-07-15 11:17:46.761453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.190 11:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.190 11:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:04.190 11:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=434392 00:07:04.190 11:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 434392 /var/tmp/spdk2.sock 00:07:04.190 11:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:04.190 11:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 434392 ']' 00:07:04.190 11:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.190 11:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.190 11:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.190 11:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.190 11:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.190 [2024-07-15 11:17:47.474615] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:04.190 [2024-07-15 11:17:47.474663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434392 ] 00:07:04.190 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.190 [2024-07-15 11:17:47.549842] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
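The two target launches traced above use overlapping core masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so both processes would contend for core 2. Because each is started with --disable-cpumask-locks, neither claims its cores at startup; the conflict only surfaces later when the locks are enabled over RPC. A minimal sketch of the same launches outside the test harness, using the exact binary path and flags from the trace above (run as root with hugepages configured, as the harness does):

  # first target: cores 0-2 (mask 0x7), lock claiming deferred
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x7 --disable-cpumask-locks &

  # second target: cores 2-4 (mask 0x1c, overlaps on core 2), separate RPC socket
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &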
00:07:04.190 [2024-07-15 11:17:47.549874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.190 [2024-07-15 11:17:47.695749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.190 [2024-07-15 11:17:47.699271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.190 [2024-07-15 11:17:47.699272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.758 [2024-07-15 11:17:48.291300] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 434165 has claimed it. 
00:07:04.758 request: 00:07:04.758 { 00:07:04.758 "method": "framework_enable_cpumask_locks", 00:07:04.758 "req_id": 1 00:07:04.758 } 00:07:04.758 Got JSON-RPC error response 00:07:04.758 response: 00:07:04.758 { 00:07:04.758 "code": -32603, 00:07:04.758 "message": "Failed to claim CPU core: 2" 00:07:04.758 } 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 434165 /var/tmp/spdk.sock 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 434165 ']' 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.758 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.016 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.016 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:05.016 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 434392 /var/tmp/spdk2.sock 00:07:05.016 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 434392 ']' 00:07:05.016 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.016 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.016 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
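The JSON-RPC exchange printed above is what rpc_cmd produces for framework_enable_cpumask_locks: the first target takes locks on cores 0-2, so when the second target tries to claim core 2 the call fails with code -32603 ("Failed to claim CPU core: 2") while the target itself keeps running without locks. A sketch of issuing the same two calls by hand, assuming the standard scripts/rpc.py client that rpc_cmd wraps in these test scripts and the socket paths shown above:

  # enable locks on the first target (default socket /var/tmp/spdk.sock) - succeeds
  ./scripts/rpc.py framework_enable_cpumask_locks

  # enable locks on the second target - expected to fail with -32603,
  # since core 2 is already locked by the first target
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks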
00:07:05.016 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.016 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.275 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.275 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:05.275 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:05.275 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:05.275 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:05.275 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:05.275 00:07:05.275 real 0m2.109s 00:07:05.275 user 0m0.866s 00:07:05.275 sys 0m0.171s 00:07:05.275 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.275 11:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.275 ************************************ 00:07:05.275 END TEST locking_overlapped_coremask_via_rpc 00:07:05.275 ************************************ 00:07:05.275 11:17:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:05.275 11:17:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:05.275 11:17:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 434165 ]] 00:07:05.275 11:17:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 434165 00:07:05.275 11:17:48 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 434165 ']' 00:07:05.275 11:17:48 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 434165 00:07:05.275 11:17:48 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:05.275 11:17:48 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.275 11:17:48 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 434165 00:07:05.275 11:17:48 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.275 11:17:48 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.276 11:17:48 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 434165' 00:07:05.276 killing process with pid 434165 00:07:05.276 11:17:48 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 434165 00:07:05.276 11:17:48 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 434165 00:07:05.534 11:17:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 434392 ]] 00:07:05.534 11:17:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 434392 00:07:05.534 11:17:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 434392 ']' 00:07:05.534 11:17:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 434392 00:07:05.534 11:17:49 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
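check_remaining_locks above compares the lock files left under /var/tmp against what the first target's 0x7 mask should produce: one spdk_cpu_lock_NNN file per claimed core, i.e. cores 0-2. A quick way to eyeball the same state on the build host (a sketch; the glob is the one the test itself expands):

  ls /var/tmp/spdk_cpu_lock_*
  # expected while the 0x7 target still holds its locks:
  # /var/tmp/spdk_cpu_lock_000  /var/tmp/spdk_cpu_lock_001  /var/tmp/spdk_cpu_lock_002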
00:07:05.534 11:17:49 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.534 11:17:49 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 434392 00:07:05.791 11:17:49 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:05.791 11:17:49 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:05.791 11:17:49 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 434392' 00:07:05.791 killing process with pid 434392 00:07:05.791 11:17:49 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 434392 00:07:05.791 11:17:49 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 434392 00:07:06.050 11:17:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:06.050 11:17:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:06.050 11:17:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 434165 ]] 00:07:06.050 11:17:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 434165 00:07:06.050 11:17:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 434165 ']' 00:07:06.050 11:17:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 434165 00:07:06.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (434165) - No such process 00:07:06.050 11:17:49 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 434165 is not found' 00:07:06.050 Process with pid 434165 is not found 00:07:06.050 11:17:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 434392 ]] 00:07:06.050 11:17:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 434392 00:07:06.050 11:17:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 434392 ']' 00:07:06.050 11:17:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 434392 00:07:06.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (434392) - No such process 00:07:06.050 11:17:49 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 434392 is not found' 00:07:06.050 Process with pid 434392 is not found 00:07:06.050 11:17:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:06.050 00:07:06.050 real 0m17.002s 00:07:06.050 user 0m29.255s 00:07:06.050 sys 0m4.934s 00:07:06.050 11:17:49 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.050 11:17:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.050 ************************************ 00:07:06.050 END TEST cpu_locks 00:07:06.050 ************************************ 00:07:06.050 11:17:49 event -- common/autotest_common.sh@1142 -- # return 0 00:07:06.050 00:07:06.050 real 0m42.367s 00:07:06.050 user 1m20.668s 00:07:06.050 sys 0m8.311s 00:07:06.050 11:17:49 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.050 11:17:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.050 ************************************ 00:07:06.050 END TEST event 00:07:06.050 ************************************ 00:07:06.050 11:17:49 -- common/autotest_common.sh@1142 -- # return 0 00:07:06.050 11:17:49 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:06.050 11:17:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.050 11:17:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.050 11:17:49 -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.050 ************************************ 00:07:06.050 START TEST thread 00:07:06.050 ************************************ 00:07:06.050 11:17:49 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:06.050 * Looking for test storage... 00:07:06.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:06.309 11:17:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:06.309 11:17:49 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:06.309 11:17:49 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.309 11:17:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.309 ************************************ 00:07:06.309 START TEST thread_poller_perf 00:07:06.309 ************************************ 00:07:06.309 11:17:49 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:06.309 [2024-07-15 11:17:49.698396] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:06.309 [2024-07-15 11:17:49.698475] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434761 ] 00:07:06.309 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.309 [2024-07-15 11:17:49.760441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.309 [2024-07-15 11:17:49.841577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.309 Running 1000 pollers for 1 seconds with 1 microseconds period. 
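Going by the banner poller_perf prints ("Running 1000 pollers for 1 seconds with 1 microseconds period"), the flags map as -b = number of pollers, -l = poller period in microseconds, -t = run time in seconds; that reading is an inference from the banner, not from the tool's help. The invocation used above, reproduced standalone:

  # 1000 pollers (-b), 1 microsecond poller period (-l), 1 second run time (-t)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1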
00:07:07.682 ====================================== 00:07:07.682 busy:2306321946 (cyc) 00:07:07.682 total_run_count: 409000 00:07:07.682 tsc_hz: 2300000000 (cyc) 00:07:07.682 ====================================== 00:07:07.682 poller_cost: 5638 (cyc), 2451 (nsec) 00:07:07.682 00:07:07.682 real 0m1.241s 00:07:07.682 user 0m1.155s 00:07:07.682 sys 0m0.080s 00:07:07.682 11:17:50 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.682 11:17:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.682 ************************************ 00:07:07.682 END TEST thread_poller_perf 00:07:07.682 ************************************ 00:07:07.682 11:17:50 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:07.682 11:17:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.682 11:17:50 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:07.682 11:17:50 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.682 11:17:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.682 ************************************ 00:07:07.682 START TEST thread_poller_perf 00:07:07.682 ************************************ 00:07:07.682 11:17:50 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.682 [2024-07-15 11:17:51.008803] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:07.682 [2024-07-15 11:17:51.008872] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434987 ] 00:07:07.682 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.682 [2024-07-15 11:17:51.080418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.682 [2024-07-15 11:17:51.151712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.682 Running 1000 pollers for 1 seconds with 0 microseconds period. 
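The result blocks poller_perf prints (one above, another just below for the 0-microsecond run) are related by simple arithmetic: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure rescales that by the reported tsc_hz, with integer truncation as in the report. Checking the first run's numbers in shell:

  echo $(( 2306321946 / 409000 ))            # 5638 cycles per poller iteration
  echo $(( 5638 * 1000000000 / 2300000000 )) # 2451 ns at the reported 2.3 GHz TSC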
00:07:09.056 ====================================== 00:07:09.056 busy:2301662052 (cyc) 00:07:09.056 total_run_count: 5505000 00:07:09.056 tsc_hz: 2300000000 (cyc) 00:07:09.056 ====================================== 00:07:09.056 poller_cost: 418 (cyc), 181 (nsec) 00:07:09.056 00:07:09.056 real 0m1.231s 00:07:09.056 user 0m1.149s 00:07:09.056 sys 0m0.079s 00:07:09.056 11:17:52 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.056 11:17:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:09.056 ************************************ 00:07:09.056 END TEST thread_poller_perf 00:07:09.056 ************************************ 00:07:09.056 11:17:52 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:09.056 11:17:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:09.056 00:07:09.056 real 0m2.691s 00:07:09.056 user 0m2.390s 00:07:09.056 sys 0m0.309s 00:07:09.056 11:17:52 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.056 11:17:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.056 ************************************ 00:07:09.056 END TEST thread 00:07:09.056 ************************************ 00:07:09.056 11:17:52 -- common/autotest_common.sh@1142 -- # return 0 00:07:09.056 11:17:52 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:09.056 11:17:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.056 11:17:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.056 11:17:52 -- common/autotest_common.sh@10 -- # set +x 00:07:09.056 ************************************ 00:07:09.056 START TEST accel 00:07:09.056 ************************************ 00:07:09.056 11:17:52 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:09.056 * Looking for test storage... 00:07:09.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:09.056 11:17:52 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:09.056 11:17:52 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:09.056 11:17:52 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:09.056 11:17:52 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=435288 00:07:09.056 11:17:52 accel -- accel/accel.sh@63 -- # waitforlisten 435288 00:07:09.056 11:17:52 accel -- common/autotest_common.sh@829 -- # '[' -z 435288 ']' 00:07:09.056 11:17:52 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.056 11:17:52 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:09.056 11:17:52 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.056 11:17:52 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:09.056 11:17:52 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:09.056 11:17:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.056 11:17:52 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.056 11:17:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.056 11:17:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.056 11:17:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.056 11:17:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.056 11:17:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.056 11:17:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:09.056 11:17:52 accel -- accel/accel.sh@41 -- # jq -r . 00:07:09.056 [2024-07-15 11:17:52.457594] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:09.056 [2024-07-15 11:17:52.457646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435288 ] 00:07:09.056 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.056 [2024-07-15 11:17:52.525911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.056 [2024-07-15 11:17:52.604749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@862 -- # return 0 00:07:09.992 11:17:53 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:09.992 11:17:53 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:09.992 11:17:53 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:09.992 11:17:53 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:09.992 11:17:53 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:09.992 11:17:53 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:09.992 11:17:53 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 
11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # IFS== 00:07:09.992 11:17:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:09.992 11:17:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:09.992 11:17:53 accel -- accel/accel.sh@75 -- # killprocess 435288 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@948 -- # '[' -z 435288 ']' 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@952 -- # kill -0 435288 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@953 -- # uname 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 435288 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 435288' 00:07:09.992 killing process with pid 435288 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@967 -- # kill 435288 00:07:09.992 11:17:53 accel -- common/autotest_common.sh@972 -- # wait 435288 00:07:10.252 11:17:53 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:10.252 11:17:53 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:10.252 11:17:53 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:10.252 11:17:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.252 11:17:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.252 11:17:53 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:10.252 11:17:53 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:10.252 11:17:53 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:10.252 11:17:53 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.252 11:17:53 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.252 11:17:53 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.252 11:17:53 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.252 11:17:53 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.252 11:17:53 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:10.252 11:17:53 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:10.252 11:17:53 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.252 11:17:53 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:10.252 11:17:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.252 11:17:53 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:10.252 11:17:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.252 11:17:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.252 11:17:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.252 ************************************ 00:07:10.252 START TEST accel_missing_filename 00:07:10.252 ************************************ 00:07:10.252 11:17:53 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:10.252 11:17:53 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:10.252 11:17:53 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:10.252 11:17:53 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:10.252 11:17:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.252 11:17:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:10.252 11:17:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.252 11:17:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:10.252 11:17:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:10.252 11:17:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:10.252 11:17:53 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.252 11:17:53 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.252 11:17:53 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.252 11:17:53 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.252 11:17:53 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.252 11:17:53 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:10.252 11:17:53 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:10.252 [2024-07-15 11:17:53.811676] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:10.252 [2024-07-15 11:17:53.811729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435558 ] 00:07:10.252 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.512 [2024-07-15 11:17:53.882677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.512 [2024-07-15 11:17:53.958575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.512 [2024-07-15 11:17:53.999932] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.512 [2024-07-15 11:17:54.060315] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:10.771 A filename is required. 
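The accel_missing_filename case above deliberately omits the input file, so accel_perf exits with "A filename is required." before starting; for compress/decompress workloads the uncompressed input has to be passed with -l. The compress_verify trace that follows supplies it (and then fails only because -y is not supported for compress). As a standalone sketch of what a valid invocation would look like, going by the help text and that trace rather than a run recorded in this log:

  # compress workload with the uncompressed input supplied via -l
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib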
00:07:10.771 11:17:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:10.771 11:17:54 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.771 11:17:54 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:10.771 11:17:54 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:10.771 11:17:54 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:10.771 11:17:54 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.771 00:07:10.771 real 0m0.350s 00:07:10.771 user 0m0.261s 00:07:10.771 sys 0m0.127s 00:07:10.771 11:17:54 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.771 11:17:54 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:10.771 ************************************ 00:07:10.771 END TEST accel_missing_filename 00:07:10.771 ************************************ 00:07:10.771 11:17:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.771 11:17:54 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.771 11:17:54 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:10.771 11:17:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.771 11:17:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.771 ************************************ 00:07:10.771 START TEST accel_compress_verify 00:07:10.771 ************************************ 00:07:10.771 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.771 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:10.771 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.771 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:10.771 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.771 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:10.771 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.771 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.772 11:17:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.772 11:17:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:10.772 11:17:54 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.772 11:17:54 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.772 11:17:54 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.772 11:17:54 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.772 11:17:54 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.772 11:17:54 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:10.772 11:17:54 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:10.772 [2024-07-15 11:17:54.229306] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:10.772 [2024-07-15 11:17:54.229362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435778 ] 00:07:10.772 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.772 [2024-07-15 11:17:54.299278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.031 [2024-07-15 11:17:54.370717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.031 [2024-07-15 11:17:54.411955] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.031 [2024-07-15 11:17:54.471810] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:11.031 00:07:11.031 Compression does not support the verify option, aborting. 00:07:11.031 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:11.031 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.031 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:11.031 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:11.031 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:11.031 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.031 00:07:11.031 real 0m0.343s 00:07:11.031 user 0m0.252s 00:07:11.031 sys 0m0.128s 00:07:11.031 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.031 11:17:54 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:11.031 ************************************ 00:07:11.031 END TEST accel_compress_verify 00:07:11.031 ************************************ 00:07:11.031 11:17:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.031 11:17:54 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:11.031 11:17:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:11.031 11:17:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.031 11:17:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.031 ************************************ 00:07:11.031 START TEST accel_wrong_workload 00:07:11.031 ************************************ 00:07:11.031 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:11.031 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:11.031 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:11.031 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:11.031 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.031 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:11.031 11:17:54 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.031 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:11.031 11:17:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:11.031 11:17:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:11.031 11:17:54 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.031 11:17:54 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.031 11:17:54 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.031 11:17:54 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.031 11:17:54 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.031 11:17:54 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:11.031 11:17:54 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:11.291 Unsupported workload type: foobar 00:07:11.291 [2024-07-15 11:17:54.637775] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:11.291 accel_perf options: 00:07:11.291 [-h help message] 00:07:11.291 [-q queue depth per core] 00:07:11.291 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:11.291 [-T number of threads per core 00:07:11.291 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:11.291 [-t time in seconds] 00:07:11.291 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:11.291 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:11.291 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:11.291 [-l for compress/decompress workloads, name of uncompressed input file 00:07:11.291 [-S for crc32c workload, use this seed value (default 0) 00:07:11.291 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:11.291 [-f for fill workload, use this BYTE value (default 255) 00:07:11.291 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:11.291 [-y verify result if this switch is on] 00:07:11.291 [-a tasks to allocate per core (default: same value as -q)] 00:07:11.291 Can be used to spread operations across a wider range of memory. 
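The option summary above is printed because "foobar" is not one of the names accepted by -w; any workload from the listed set parses. For reference, the accel_crc32c run later in this log uses exactly such an invocation (the harness also passes an accel JSON config over -c /dev/fd/62, omitted here):

  # crc32c workload, seed 32 (-S), verify results (-y), run for 1 second (-t)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w crc32c -S 32 -y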
00:07:11.291 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:11.291 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.291 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.291 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.291 00:07:11.291 real 0m0.035s 00:07:11.291 user 0m0.023s 00:07:11.291 sys 0m0.011s 00:07:11.291 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.291 11:17:54 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:11.291 ************************************ 00:07:11.291 END TEST accel_wrong_workload 00:07:11.291 ************************************ 00:07:11.291 Error: writing output failed: Broken pipe 00:07:11.291 11:17:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.292 11:17:54 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:11.292 11:17:54 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:11.292 11:17:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.292 11:17:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.292 ************************************ 00:07:11.292 START TEST accel_negative_buffers 00:07:11.292 ************************************ 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:11.292 11:17:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:11.292 11:17:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:11.292 11:17:54 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.292 11:17:54 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.292 11:17:54 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.292 11:17:54 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.292 11:17:54 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.292 11:17:54 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:11.292 11:17:54 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:11.292 -x option must be non-negative. 
00:07:11.292 [2024-07-15 11:17:54.736076] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:11.292 accel_perf options: 00:07:11.292 [-h help message] 00:07:11.292 [-q queue depth per core] 00:07:11.292 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:11.292 [-T number of threads per core 00:07:11.292 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:11.292 [-t time in seconds] 00:07:11.292 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:11.292 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:11.292 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:11.292 [-l for compress/decompress workloads, name of uncompressed input file 00:07:11.292 [-S for crc32c workload, use this seed value (default 0) 00:07:11.292 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:11.292 [-f for fill workload, use this BYTE value (default 255) 00:07:11.292 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:11.292 [-y verify result if this switch is on] 00:07:11.292 [-a tasks to allocate per core (default: same value as -q)] 00:07:11.292 Can be used to spread operations across a wider range of memory. 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.292 00:07:11.292 real 0m0.034s 00:07:11.292 user 0m0.016s 00:07:11.292 sys 0m0.018s 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.292 11:17:54 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:11.292 ************************************ 00:07:11.292 END TEST accel_negative_buffers 00:07:11.292 ************************************ 00:07:11.292 Error: writing output failed: Broken pipe 00:07:11.292 11:17:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.292 11:17:54 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:11.292 11:17:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:11.292 11:17:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.292 11:17:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.292 ************************************ 00:07:11.292 START TEST accel_crc32c 00:07:11.292 ************************************ 00:07:11.292 11:17:54 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:11.292 11:17:54 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:11.292 [2024-07-15 11:17:54.832642] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:11.292 [2024-07-15 11:17:54.832706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435850 ] 00:07:11.292 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.555 [2024-07-15 11:17:54.900010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.555 [2024-07-15 11:17:54.972978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.555 11:17:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:12.968 11:17:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.968 00:07:12.968 real 0m1.350s 00:07:12.968 user 0m1.236s 00:07:12.969 sys 0m0.127s 00:07:12.969 11:17:56 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.969 11:17:56 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:12.969 ************************************ 00:07:12.969 END TEST accel_crc32c 00:07:12.969 ************************************ 00:07:12.969 11:17:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.969 11:17:56 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:12.969 11:17:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:12.969 11:17:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.969 11:17:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.969 ************************************ 00:07:12.969 START TEST accel_crc32c_C2 00:07:12.969 ************************************ 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:12.969 11:17:56 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:12.969 [2024-07-15 11:17:56.249941] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:12.969 [2024-07-15 11:17:56.250003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436098 ] 00:07:12.969 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.969 [2024-07-15 11:17:56.320710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.969 [2024-07-15 11:17:56.392284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:12.969 11:17:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.359 00:07:14.359 real 0m1.350s 00:07:14.359 user 0m1.243s 00:07:14.359 sys 0m0.121s 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.359 11:17:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:14.359 ************************************ 00:07:14.359 END TEST accel_crc32c_C2 00:07:14.359 ************************************ 00:07:14.359 11:17:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.359 11:17:57 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:14.359 11:17:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:14.359 11:17:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.359 11:17:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.359 ************************************ 00:07:14.359 START TEST accel_copy 00:07:14.359 ************************************ 00:07:14.359 11:17:57 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:14.359 [2024-07-15 11:17:57.667688] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:14.359 [2024-07-15 11:17:57.667749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436348 ] 00:07:14.359 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.359 [2024-07-15 11:17:57.738299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.359 [2024-07-15 11:17:57.811583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.359 11:17:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.737 
11:17:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:15.737 11:17:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.737 00:07:15.737 real 0m1.351s 00:07:15.737 user 0m1.240s 00:07:15.737 sys 0m0.123s 00:07:15.737 11:17:58 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.737 11:17:58 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.737 ************************************ 00:07:15.737 END TEST accel_copy 00:07:15.737 ************************************ 00:07:15.737 11:17:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.737 11:17:59 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:15.737 11:17:59 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:15.737 11:17:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.737 11:17:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.737 ************************************ 00:07:15.737 START TEST accel_fill 00:07:15.737 ************************************ 00:07:15.737 11:17:59 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:15.737 [2024-07-15 11:17:59.088138] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:15.737 [2024-07-15 11:17:59.088189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436601 ] 00:07:15.737 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.737 [2024-07-15 11:17:59.140891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.737 [2024-07-15 11:17:59.213344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.737 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.738 11:17:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:17.116 11:18:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.116 00:07:17.116 real 0m1.334s 00:07:17.116 user 0m1.238s 00:07:17.116 sys 0m0.109s 00:07:17.116 11:18:00 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.116 11:18:00 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:17.116 ************************************ 00:07:17.116 END TEST accel_fill 00:07:17.116 ************************************ 00:07:17.116 11:18:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.116 11:18:00 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:17.116 11:18:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:17.116 11:18:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.116 11:18:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.116 ************************************ 00:07:17.116 START TEST accel_copy_crc32c 00:07:17.116 ************************************ 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:17.116 [2024-07-15 11:18:00.490482] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:17.116 [2024-07-15 11:18:00.490550] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436884 ] 00:07:17.116 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.116 [2024-07-15 11:18:00.559379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.116 [2024-07-15 11:18:00.631855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.116 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.117 
11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.117 11:18:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.495 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.496 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.496 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.496 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.496 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:18.496 11:18:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.496 00:07:18.496 real 0m1.351s 00:07:18.496 user 0m1.233s 00:07:18.496 sys 0m0.131s 00:07:18.496 11:18:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.496 11:18:01 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:18.496 ************************************ 00:07:18.496 END TEST accel_copy_crc32c 00:07:18.496 ************************************ 00:07:18.496 11:18:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.496 11:18:01 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:18.496 11:18:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:18.496 11:18:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.496 11:18:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.496 ************************************ 00:07:18.496 START TEST accel_copy_crc32c_C2 00:07:18.496 ************************************ 00:07:18.496 11:18:01 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:18.496 11:18:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:18.496 [2024-07-15 11:18:01.905847] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:18.496 [2024-07-15 11:18:01.905904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437218 ] 00:07:18.496 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.496 [2024-07-15 11:18:01.975101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.496 [2024-07-15 11:18:02.049035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.755 11:18:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.693 00:07:19.693 real 0m1.350s 00:07:19.693 user 0m1.240s 00:07:19.693 sys 0m0.125s 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.693 11:18:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:19.693 ************************************ 00:07:19.693 END TEST accel_copy_crc32c_C2 00:07:19.693 ************************************ 00:07:19.693 11:18:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.693 11:18:03 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:19.693 11:18:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:19.693 11:18:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.693 11:18:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.951 ************************************ 00:07:19.951 START TEST accel_dualcast 00:07:19.951 ************************************ 00:07:19.951 11:18:03 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:19.951 [2024-07-15 11:18:03.326275] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:19.951 [2024-07-15 11:18:03.326330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437472 ] 00:07:19.951 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.951 [2024-07-15 11:18:03.395283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.951 [2024-07-15 11:18:03.474263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.951 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.952 11:18:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.328 11:18:04 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:21.328 11:18:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.328 00:07:21.328 real 0m1.357s 00:07:21.328 user 0m1.250s 00:07:21.328 sys 0m0.119s 00:07:21.328 11:18:04 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.328 11:18:04 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:21.328 ************************************ 00:07:21.328 END TEST accel_dualcast 00:07:21.328 ************************************ 00:07:21.328 11:18:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.328 11:18:04 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:21.328 11:18:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:21.328 11:18:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.328 11:18:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.328 ************************************ 00:07:21.328 START TEST accel_compare 00:07:21.328 ************************************ 00:07:21.328 11:18:04 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:21.328 11:18:04 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:21.328 [2024-07-15 11:18:04.749418] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:21.328 [2024-07-15 11:18:04.749480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437723 ] 00:07:21.328 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.328 [2024-07-15 11:18:04.817418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.328 [2024-07-15 11:18:04.896362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.588 11:18:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 
11:18:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:22.526 11:18:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.526 00:07:22.526 real 0m1.357s 00:07:22.526 user 0m1.242s 00:07:22.526 sys 0m0.126s 00:07:22.526 11:18:06 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.526 11:18:06 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:22.526 ************************************ 00:07:22.526 END TEST accel_compare 00:07:22.526 ************************************ 00:07:22.526 11:18:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.526 11:18:06 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:22.526 11:18:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:22.526 11:18:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.526 11:18:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.786 ************************************ 00:07:22.786 START TEST accel_xor 00:07:22.786 ************************************ 00:07:22.786 11:18:06 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:22.786 [2024-07-15 11:18:06.169705] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:22.786 [2024-07-15 11:18:06.169754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437974 ] 00:07:22.786 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.786 [2024-07-15 11:18:06.239068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.786 [2024-07-15 11:18:06.313974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:22.786 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.787 11:18:06 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.787 11:18:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.166 00:07:24.166 real 0m1.352s 00:07:24.166 user 0m1.233s 00:07:24.166 sys 0m0.130s 00:07:24.166 11:18:07 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.166 11:18:07 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:24.166 ************************************ 00:07:24.166 END TEST accel_xor 00:07:24.166 ************************************ 00:07:24.166 11:18:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.166 11:18:07 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:24.166 11:18:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:24.166 11:18:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.166 11:18:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.166 ************************************ 00:07:24.166 START TEST accel_xor 00:07:24.166 ************************************ 00:07:24.166 11:18:07 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.166 11:18:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.167 11:18:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.167 11:18:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.167 11:18:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:24.167 11:18:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:24.167 [2024-07-15 11:18:07.591986] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:24.167 [2024-07-15 11:18:07.592044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438225 ] 00:07:24.167 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.167 [2024-07-15 11:18:07.659781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.167 [2024-07-15 11:18:07.733639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.426 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 11:18:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 11:18:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.359 11:18:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.359 11:18:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.359 11:18:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.359 11:18:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:25.360 11:18:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.360 00:07:25.360 real 0m1.351s 00:07:25.360 user 0m1.235s 00:07:25.360 sys 0m0.129s 00:07:25.360 11:18:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.360 11:18:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:25.360 ************************************ 00:07:25.360 END TEST accel_xor 00:07:25.360 ************************************ 00:07:25.618 11:18:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.619 11:18:08 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:25.619 11:18:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:25.619 11:18:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.619 11:18:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.619 ************************************ 00:07:25.619 START TEST accel_dif_verify 00:07:25.619 ************************************ 00:07:25.619 11:18:08 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:25.619 11:18:08 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:25.619 [2024-07-15 11:18:09.013720] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:25.619 [2024-07-15 11:18:09.013792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438717 ] 00:07:25.619 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.619 [2024-07-15 11:18:09.084321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.619 [2024-07-15 11:18:09.164765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.619 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.619 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.619 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.619 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.619 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.619 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.619 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.619 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.619 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:25.619 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.619 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.877 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.877 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.877 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.877 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.877 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.877 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.877 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.877 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.878 11:18:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:26.814 11:18:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.814 00:07:26.814 real 0m1.362s 00:07:26.814 user 0m1.247s 00:07:26.814 sys 0m0.128s 00:07:26.814 11:18:10 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.814 11:18:10 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:26.814 ************************************ 00:07:26.814 END TEST accel_dif_verify 00:07:26.814 ************************************ 00:07:26.814 11:18:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.814 11:18:10 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:26.814 11:18:10 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:26.814 11:18:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.814 11:18:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.074 ************************************ 00:07:27.074 START TEST accel_dif_generate 00:07:27.074 ************************************ 00:07:27.074 11:18:10 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 
11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:27.074 [2024-07-15 11:18:10.442698] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:27.074 [2024-07-15 11:18:10.442745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439132 ] 00:07:27.074 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.074 [2024-07-15 11:18:10.509937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.074 [2024-07-15 11:18:10.584788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:27.074 11:18:10 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.074 11:18:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.452 11:18:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:28.452 11:18:11 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.452 00:07:28.452 real 0m1.350s 00:07:28.452 user 0m1.236s 00:07:28.452 sys 0m0.129s 00:07:28.452 11:18:11 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.452 11:18:11 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:28.453 ************************************ 00:07:28.453 END TEST accel_dif_generate 00:07:28.453 ************************************ 00:07:28.453 11:18:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.453 11:18:11 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:28.453 11:18:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:28.453 11:18:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.453 11:18:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.453 ************************************ 00:07:28.453 START TEST accel_dif_generate_copy 00:07:28.453 ************************************ 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:28.453 11:18:11 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:28.453 [2024-07-15 11:18:11.862145] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:28.453 [2024-07-15 11:18:11.862202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439400 ] 00:07:28.453 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.453 [2024-07-15 11:18:11.933264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.453 [2024-07-15 11:18:12.005745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.712 11:18:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.648 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.649 00:07:29.649 real 0m1.352s 00:07:29.649 user 0m1.242s 00:07:29.649 sys 0m0.123s 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.649 11:18:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:29.649 ************************************ 00:07:29.649 END TEST accel_dif_generate_copy 00:07:29.649 ************************************ 00:07:29.649 11:18:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.649 11:18:13 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:29.649 11:18:13 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.649 11:18:13 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:29.649 11:18:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.649 11:18:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.908 ************************************ 00:07:29.908 START TEST accel_comp 00:07:29.908 ************************************ 00:07:29.908 11:18:13 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.908 11:18:13 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:29.908 [2024-07-15 11:18:13.285038] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:29.908 [2024-07-15 11:18:13.285089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439676 ] 00:07:29.908 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.908 [2024-07-15 11:18:13.351597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.908 [2024-07-15 11:18:13.424305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.908 11:18:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.282 11:18:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.282 11:18:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:31.283 11:18:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.283 00:07:31.283 real 0m1.349s 00:07:31.283 user 0m1.239s 00:07:31.283 sys 0m0.125s 00:07:31.283 11:18:14 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.283 11:18:14 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:31.283 ************************************ 00:07:31.283 END TEST accel_comp 00:07:31.283 ************************************ 00:07:31.283 11:18:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.283 11:18:14 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:31.283 11:18:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:31.283 11:18:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.283 11:18:14 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.283 ************************************ 00:07:31.283 START TEST accel_decomp 00:07:31.283 ************************************ 00:07:31.283 11:18:14 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:31.283 11:18:14 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:31.283 [2024-07-15 11:18:14.703159] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:31.283 [2024-07-15 11:18:14.703212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439940 ] 00:07:31.283 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.283 [2024-07-15 11:18:14.769682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.283 [2024-07-15 11:18:14.841717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.541 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.542 11:18:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.479 11:18:16 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:32.479 11:18:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.479 00:07:32.479 real 0m1.346s 00:07:32.479 user 0m1.240s 00:07:32.479 sys 0m0.121s 00:07:32.479 11:18:16 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.479 11:18:16 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:32.479 ************************************ 00:07:32.479 END TEST accel_decomp 00:07:32.479 ************************************ 00:07:32.479 11:18:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.479 11:18:16 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:32.479 11:18:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:32.479 11:18:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.479 11:18:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.769 ************************************ 00:07:32.769 START TEST accel_decomp_full 00:07:32.769 ************************************ 00:07:32.769 11:18:16 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:32.769 11:18:16 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:32.769 [2024-07-15 11:18:16.120220] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:32.769 [2024-07-15 11:18:16.120284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440215 ] 00:07:32.769 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.769 [2024-07-15 11:18:16.189466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.769 [2024-07-15 11:18:16.266019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.769 11:18:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.146 11:18:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.146 00:07:34.146 real 0m1.367s 00:07:34.146 user 0m1.259s 00:07:34.146 sys 0m0.122s 00:07:34.146 11:18:17 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.146 11:18:17 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:34.146 ************************************ 00:07:34.146 END TEST accel_decomp_full 00:07:34.146 ************************************ 00:07:34.146 11:18:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.146 11:18:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:34.146 11:18:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:34.146 11:18:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.146 11:18:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.146 ************************************ 00:07:34.146 START TEST accel_decomp_mcore 00:07:34.146 ************************************ 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:34.146 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:34.146 [2024-07-15 11:18:17.555818] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:34.146 [2024-07-15 11:18:17.555874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440491 ] 00:07:34.146 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.146 [2024-07-15 11:18:17.625723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.146 [2024-07-15 11:18:17.700775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.146 [2024-07-15 11:18:17.700883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.146 [2024-07-15 11:18:17.700988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.146 [2024-07-15 11:18:17.700989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:34.404 11:18:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.340 00:07:35.340 real 0m1.362s 00:07:35.340 user 0m4.564s 00:07:35.340 sys 0m0.140s 00:07:35.340 11:18:18 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.340 11:18:18 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:35.340 ************************************ 00:07:35.340 END TEST accel_decomp_mcore 00:07:35.340 ************************************ 00:07:35.340 11:18:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.340 11:18:18 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.340 11:18:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:35.340 11:18:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.340 11:18:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.599 ************************************ 00:07:35.599 START TEST accel_decomp_full_mcore 00:07:35.599 ************************************ 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:35.599 11:18:18 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:35.599 [2024-07-15 11:18:18.987877] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
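[editor's note] At this point the harness has closed accel_decomp_mcore and is launching accel_decomp_full_mcore through the run_test/accel_test wrappers. For reference, a minimal standalone sketch of the same invocation, assuming an already-built SPDK tree and the bundled test/accel/bib input file; the flag readings below are inferred from the traced values and should be treated as assumptions, not documentation:

    # Hypothetical direct invocation mirroring the traced command line (paths shortened):
    #   -t 1            run for 1 second (the trace records '1 seconds')
    #   -w decompress   decompress workload, served here by the software module (accel_module=software)
    #   -l .../bib      compressed input fixture shipped with SPDK
    #   -y              verify the decompressed output
    #   -o 0            apparently "use the whole input per operation" (the trace later
    #                   records a 111250-byte size for the full_* variants)
    #   -m 0xf          core mask: reactors on cores 0-3
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf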
00:07:35.599 [2024-07-15 11:18:18.987928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440772 ] 00:07:35.599 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.600 [2024-07-15 11:18:19.054600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.600 [2024-07-15 11:18:19.129106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.600 [2024-07-15 11:18:19.129319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.600 [2024-07-15 11:18:19.129320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.600 [2024-07-15 11:18:19.129214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.600 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.860 11:18:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.796 00:07:36.796 real 0m1.369s 00:07:36.796 user 0m4.603s 00:07:36.796 sys 0m0.134s 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.796 11:18:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:36.796 ************************************ 00:07:36.796 END TEST accel_decomp_full_mcore 00:07:36.796 ************************************ 00:07:36.796 11:18:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.796 11:18:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.796 11:18:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:36.796 11:18:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.796 11:18:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.056 ************************************ 00:07:37.056 START TEST accel_decomp_mthread 00:07:37.056 ************************************ 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:37.056 [2024-07-15 11:18:20.426497] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
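[editor's note] The summary above for accel_decomp_full_mcore (real 0m1.369s, user 0m4.603s) is consistent with the 0xf core mask keeping roughly four reactors busy for the 1-second run. A back-of-the-envelope check, using only the numbers printed in the trace:

    # Rough utilization estimate from the traced times (illustrative only)
    real=1.369; user=4.603; cores=4
    awk -v r="$real" -v u="$user" -v c="$cores" \
        'BEGIN { printf "%.2fx speedup over 1 core, ~%.0f%% of %d cores\n", u/r, 100*u/(r*c), c }'
    # -> about 3.36x, i.e. ~84% average utilization of the 4 cores

The accel_decomp_mthread run that starts next drops the core mask and passes -T 2 instead, presumably two worker threads on the single default core (the EAL line below shows -c 0x1).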
00:07:37.056 [2024-07-15 11:18:20.426559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441065 ] 00:07:37.056 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.056 [2024-07-15 11:18:20.498490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.056 [2024-07-15 11:18:20.571606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.056 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.057 11:18:20 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.057 11:18:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.434 11:18:21 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.434 00:07:38.434 real 0m1.360s 00:07:38.434 user 0m1.253s 00:07:38.434 sys 0m0.121s 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.434 11:18:21 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:38.434 ************************************ 00:07:38.434 END TEST accel_decomp_mthread 00:07:38.434 ************************************ 00:07:38.434 11:18:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.434 11:18:21 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.434 11:18:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:38.434 11:18:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.434 11:18:21 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.434 ************************************ 00:07:38.434 START TEST accel_decomp_full_mthread 00:07:38.434 ************************************ 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:38.434 11:18:21 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:38.434 [2024-07-15 11:18:21.853450] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
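[editor's note] accel_decomp_full_mthread combines the two previous variations: full-size operations (-o 0) and two threads (-T 2). Every one of these runs also hands accel_perf a JSON config on /dev/fd/62; the traces show an empty accel_json_cfg array, so no module overrides are injected here. A hedged sketch of that config-on-a-descriptor pattern, with CONFIG_JSON standing in for whatever configuration a caller might supply:

    # Hypothetical: pass a JSON config to accel_perf via process substitution rather than fd 62.
    CONFIG_JSON='{}'   # assumption: an empty object stands in for "no overrides"; the harness
                       # builds this from accel_json_cfg, which is empty in these runs
    ./build/examples/accel_perf -c <(printf '%s' "$CONFIG_JSON") \
        -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2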
00:07:38.434 [2024-07-15 11:18:21.853512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441331 ] 00:07:38.434 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.434 [2024-07-15 11:18:21.924419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.434 [2024-07-15 11:18:21.996828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.693 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.694 11:18:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.631 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.632 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.632 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.632 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.632 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.632 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.632 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.632 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:39.632 11:18:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.632 00:07:39.632 real 0m1.376s 00:07:39.632 user 0m1.261s 00:07:39.632 sys 0m0.127s 00:07:39.632 11:18:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.632 11:18:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:39.632 ************************************ 00:07:39.632 END TEST accel_decomp_full_mthread 
00:07:39.632 ************************************ 00:07:39.889 11:18:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.889 11:18:23 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:39.889 11:18:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:39.889 11:18:23 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:39.889 11:18:23 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:39.889 11:18:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.889 11:18:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.889 11:18:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.889 11:18:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.889 11:18:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.889 11:18:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.889 11:18:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.889 11:18:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:39.889 11:18:23 accel -- accel/accel.sh@41 -- # jq -r . 00:07:39.889 ************************************ 00:07:39.889 START TEST accel_dif_functional_tests 00:07:39.889 ************************************ 00:07:39.890 11:18:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:39.890 [2024-07-15 11:18:23.316787] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:39.890 [2024-07-15 11:18:23.316825] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441585 ] 00:07:39.890 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.890 [2024-07-15 11:18:23.380510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.890 [2024-07-15 11:18:23.453927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.890 [2024-07-15 11:18:23.454035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.890 [2024-07-15 11:18:23.454036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.148 00:07:40.148 00:07:40.148 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.148 http://cunit.sourceforge.net/ 00:07:40.148 00:07:40.148 00:07:40.148 Suite: accel_dif 00:07:40.148 Test: verify: DIF generated, GUARD check ...passed 00:07:40.148 Test: verify: DIF generated, APPTAG check ...passed 00:07:40.148 Test: verify: DIF generated, REFTAG check ...passed 00:07:40.148 Test: verify: DIF not generated, GUARD check ...[2024-07-15 11:18:23.523187] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:40.148 passed 00:07:40.148 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 11:18:23.523241] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:40.148 passed 00:07:40.148 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 11:18:23.523259] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:40.148 passed 00:07:40.148 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:40.148 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 11:18:23.523303] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:40.148 passed 00:07:40.148 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:40.148 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:40.148 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:40.148 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 11:18:23.523403] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:40.148 passed 00:07:40.148 Test: verify copy: DIF generated, GUARD check ...passed 00:07:40.148 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:40.148 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:40.148 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 11:18:23.523507] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:40.148 passed 00:07:40.148 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 11:18:23.523528] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:40.148 passed 00:07:40.148 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 11:18:23.523548] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:40.148 passed 00:07:40.148 Test: generate copy: DIF generated, GUARD check ...passed 00:07:40.148 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:40.148 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:40.148 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:40.148 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:40.148 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:40.148 Test: generate copy: iovecs-len validate ...[2024-07-15 11:18:23.523703] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
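[editor's note] The *ERROR* lines from dif.c in the accel_dif_functional_tests output are expected: the "not generated" / "incorrect" cases are negative tests that deliberately feed mismatched Guard, App Tag and Ref Tag values, and the assertion is that the verify path reports the mismatch, which is why each error is immediately followed by "passed". The suite itself is the small CUnit binary invoked by the wrapper; a hypothetical standalone run, with the config contents left as a placeholder:

    # Run the DIF functional tests directly, mirroring the traced command.
    # -c takes an accel JSON config; the harness feeds it on /dev/fd/62.
    ./test/accel/dif/dif -c <(printf '%s' "$ACCEL_CONFIG_JSON")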
00:07:40.148 passed 00:07:40.148 Test: generate copy: buffer alignment validate ...passed 00:07:40.148 00:07:40.148 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.148 suites 1 1 n/a 0 0 00:07:40.148 tests 26 26 26 0 0 00:07:40.148 asserts 115 115 115 0 n/a 00:07:40.148 00:07:40.148 Elapsed time = 0.002 seconds 00:07:40.148 00:07:40.148 real 0m0.419s 00:07:40.148 user 0m0.621s 00:07:40.148 sys 0m0.154s 00:07:40.148 11:18:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.148 11:18:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:40.148 ************************************ 00:07:40.148 END TEST accel_dif_functional_tests 00:07:40.148 ************************************ 00:07:40.148 11:18:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.148 00:07:40.148 real 0m31.406s 00:07:40.148 user 0m34.936s 00:07:40.148 sys 0m4.516s 00:07:40.148 11:18:23 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.148 11:18:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.148 ************************************ 00:07:40.148 END TEST accel 00:07:40.148 ************************************ 00:07:40.406 11:18:23 -- common/autotest_common.sh@1142 -- # return 0 00:07:40.406 11:18:23 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:40.406 11:18:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:40.406 11:18:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.406 11:18:23 -- common/autotest_common.sh@10 -- # set +x 00:07:40.406 ************************************ 00:07:40.406 START TEST accel_rpc 00:07:40.406 ************************************ 00:07:40.406 11:18:23 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:40.406 * Looking for test storage... 00:07:40.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:40.406 11:18:23 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:40.406 11:18:23 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=441655 00:07:40.406 11:18:23 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 441655 00:07:40.406 11:18:23 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:40.406 11:18:23 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 441655 ']' 00:07:40.407 11:18:23 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.407 11:18:23 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.407 11:18:23 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.407 11:18:23 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.407 11:18:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.407 [2024-07-15 11:18:23.933440] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:40.407 [2024-07-15 11:18:23.933490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441655 ] 00:07:40.407 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.665 [2024-07-15 11:18:23.998371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.665 [2024-07-15 11:18:24.077828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.233 11:18:24 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.233 11:18:24 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:41.233 11:18:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:41.233 11:18:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:41.233 11:18:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:41.233 11:18:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:41.233 11:18:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:41.233 11:18:24 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.233 11:18:24 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.233 11:18:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.233 ************************************ 00:07:41.233 START TEST accel_assign_opcode 00:07:41.233 ************************************ 00:07:41.233 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:41.233 11:18:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:41.233 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.233 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:41.233 [2024-07-15 11:18:24.767905] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:41.233 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.233 11:18:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:41.233 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.233 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:41.233 [2024-07-15 11:18:24.775916] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:41.233 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.233 11:18:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:41.233 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.234 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:41.493 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.493 11:18:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:41.493 11:18:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:41.493 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
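[editor's note] The accel_rpc suite works because the target was started with --wait-for-rpc, which holds subsystem initialization until RPCs have had a chance to reconfigure it; the copy opcode can therefore be assigned (first to a bogus module, then to software) before framework_start_init, and the accel_get_opc_assignments query confirms where it landed (the "software" line just below). A condensed sketch of the same sequence, with paths abbreviated and rpc.py assumed to be SPDK's scripts/rpc.py:

    ./build/bin/spdk_tgt --wait-for-rpc &        # hold init until told otherwise
    # (the harness waits for the RPC socket before issuing commands)
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted at this stage
    ./scripts/rpc.py accel_assign_opc -o copy -m software    # the later assignment is what sticks here
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # traced result: software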
00:07:41.493 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:41.493 11:18:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:41.493 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.493 software 00:07:41.493 00:07:41.493 real 0m0.237s 00:07:41.493 user 0m0.045s 00:07:41.493 sys 0m0.010s 00:07:41.493 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.493 11:18:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:41.493 ************************************ 00:07:41.493 END TEST accel_assign_opcode 00:07:41.493 ************************************ 00:07:41.493 11:18:25 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:41.493 11:18:25 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 441655 00:07:41.493 11:18:25 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 441655 ']' 00:07:41.493 11:18:25 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 441655 00:07:41.493 11:18:25 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:41.493 11:18:25 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:41.493 11:18:25 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 441655 00:07:41.493 11:18:25 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:41.493 11:18:25 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:41.493 11:18:25 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 441655' 00:07:41.493 killing process with pid 441655 00:07:41.493 11:18:25 accel_rpc -- common/autotest_common.sh@967 -- # kill 441655 00:07:41.493 11:18:25 accel_rpc -- common/autotest_common.sh@972 -- # wait 441655 00:07:42.064 00:07:42.064 real 0m1.581s 00:07:42.064 user 0m1.653s 00:07:42.064 sys 0m0.423s 00:07:42.064 11:18:25 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.064 11:18:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.064 ************************************ 00:07:42.064 END TEST accel_rpc 00:07:42.064 ************************************ 00:07:42.064 11:18:25 -- common/autotest_common.sh@1142 -- # return 0 00:07:42.064 11:18:25 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:42.064 11:18:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.064 11:18:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.064 11:18:25 -- common/autotest_common.sh@10 -- # set +x 00:07:42.064 ************************************ 00:07:42.064 START TEST app_cmdline 00:07:42.064 ************************************ 00:07:42.064 11:18:25 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:42.064 * Looking for test storage... 
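[editor's note] Between suites the wrapper tears the target down with its killprocess helper; the steps traced above amount to the following pattern (the pid is just the one from this run, and the real helper also branches on uname, omitted here):

    # Sketch of the traced killprocess flow for the spdk_tgt started earlier
    pid=441655
    if kill -0 "$pid" 2>/dev/null; then                 # still alive?
        name=$(ps --no-headers -o comm= "$pid")         # trace shows reactor_0
        [ "$name" != sudo ] && kill "$pid" && wait "$pid"   # wait is valid: tgt is a child of the harness shell
    fi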
00:07:42.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:42.064 11:18:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:42.064 11:18:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=441964 00:07:42.064 11:18:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 441964 00:07:42.064 11:18:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:42.064 11:18:25 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 441964 ']' 00:07:42.064 11:18:25 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.064 11:18:25 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.064 11:18:25 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.064 11:18:25 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.064 11:18:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:42.064 [2024-07-15 11:18:25.582999] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:42.064 [2024-07-15 11:18:25.583051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441964 ] 00:07:42.064 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.064 [2024-07-15 11:18:25.650509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.323 [2024-07-15 11:18:25.729687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.890 11:18:26 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.890 11:18:26 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:42.890 11:18:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:43.149 { 00:07:43.149 "version": "SPDK v24.09-pre git sha1 e7cce062d", 00:07:43.149 "fields": { 00:07:43.149 "major": 24, 00:07:43.149 "minor": 9, 00:07:43.149 "patch": 0, 00:07:43.149 "suffix": "-pre", 00:07:43.149 "commit": "e7cce062d" 00:07:43.149 } 00:07:43.149 } 00:07:43.149 11:18:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:43.149 11:18:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:43.149 11:18:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:43.149 11:18:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:43.149 11:18:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.149 11:18:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:43.149 11:18:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.149 11:18:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:43.149 11:18:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:43.149 11:18:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:43.149 11:18:26 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.409 request: 00:07:43.409 { 00:07:43.409 "method": "env_dpdk_get_mem_stats", 00:07:43.409 "req_id": 1 00:07:43.409 } 00:07:43.409 Got JSON-RPC error response 00:07:43.409 response: 00:07:43.409 { 00:07:43.409 "code": -32601, 00:07:43.409 "message": "Method not found" 00:07:43.409 } 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:43.409 11:18:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 441964 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 441964 ']' 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 441964 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 441964 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 441964' 00:07:43.409 killing process with pid 441964 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@967 -- # kill 441964 00:07:43.409 11:18:26 app_cmdline -- common/autotest_common.sh@972 -- # wait 441964 00:07:43.669 00:07:43.669 real 0m1.695s 00:07:43.669 user 0m2.041s 00:07:43.669 sys 0m0.423s 00:07:43.669 11:18:27 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.669 
11:18:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.669 ************************************ 00:07:43.669 END TEST app_cmdline 00:07:43.669 ************************************ 00:07:43.669 11:18:27 -- common/autotest_common.sh@1142 -- # return 0 00:07:43.669 11:18:27 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:43.669 11:18:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.669 11:18:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.669 11:18:27 -- common/autotest_common.sh@10 -- # set +x 00:07:43.669 ************************************ 00:07:43.669 START TEST version 00:07:43.669 ************************************ 00:07:43.669 11:18:27 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:43.929 * Looking for test storage... 00:07:43.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:43.929 11:18:27 version -- app/version.sh@17 -- # get_header_version major 00:07:43.929 11:18:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.929 11:18:27 version -- app/version.sh@14 -- # cut -f2 00:07:43.929 11:18:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.929 11:18:27 version -- app/version.sh@17 -- # major=24 00:07:43.929 11:18:27 version -- app/version.sh@18 -- # get_header_version minor 00:07:43.929 11:18:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.929 11:18:27 version -- app/version.sh@14 -- # cut -f2 00:07:43.929 11:18:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.929 11:18:27 version -- app/version.sh@18 -- # minor=9 00:07:43.929 11:18:27 version -- app/version.sh@19 -- # get_header_version patch 00:07:43.929 11:18:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.929 11:18:27 version -- app/version.sh@14 -- # cut -f2 00:07:43.929 11:18:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.929 11:18:27 version -- app/version.sh@19 -- # patch=0 00:07:43.929 11:18:27 version -- app/version.sh@20 -- # get_header_version suffix 00:07:43.929 11:18:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:43.929 11:18:27 version -- app/version.sh@14 -- # cut -f2 00:07:43.929 11:18:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.929 11:18:27 version -- app/version.sh@20 -- # suffix=-pre 00:07:43.929 11:18:27 version -- app/version.sh@22 -- # version=24.9 00:07:43.929 11:18:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:43.929 11:18:27 version -- app/version.sh@28 -- # version=24.9rc0 00:07:43.929 11:18:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:43.929 11:18:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
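The version suite traced above does not trust a single source: it scrapes include/spdk/version.h for the version components and then checks that the installed Python bindings report the same string. A condensed sketch of that extraction, using the pipeline from the trace (the mapping of the -pre suffix to rc0 is inferred from the values logged for this build):

  hdr=include/spdk/version.h    # under the spdk checkout in this workspace
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  version=$major.$minor
  (( patch != 0 )) && version=$version.$patch
  [[ $suffix == -pre ]] && version=${version}rc0     # 24.9 + -pre -> 24.9rc0 in this run
  [[ $(python3 -c 'import spdk; print(spdk.__version__)') == "$version" ]]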
00:07:43.929 11:18:27 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:43.929 11:18:27 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:43.929 00:07:43.929 real 0m0.158s 00:07:43.929 user 0m0.076s 00:07:43.929 sys 0m0.119s 00:07:43.929 11:18:27 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.929 11:18:27 version -- common/autotest_common.sh@10 -- # set +x 00:07:43.929 ************************************ 00:07:43.929 END TEST version 00:07:43.929 ************************************ 00:07:43.929 11:18:27 -- common/autotest_common.sh@1142 -- # return 0 00:07:43.929 11:18:27 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:43.929 11:18:27 -- spdk/autotest.sh@198 -- # uname -s 00:07:43.929 11:18:27 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:43.929 11:18:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:43.929 11:18:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:43.929 11:18:27 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:43.929 11:18:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:43.929 11:18:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:43.929 11:18:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.929 11:18:27 -- common/autotest_common.sh@10 -- # set +x 00:07:43.929 11:18:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:43.929 11:18:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:43.929 11:18:27 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:43.929 11:18:27 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:43.929 11:18:27 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:43.929 11:18:27 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:43.929 11:18:27 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:43.929 11:18:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.929 11:18:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.929 11:18:27 -- common/autotest_common.sh@10 -- # set +x 00:07:43.929 ************************************ 00:07:43.929 START TEST nvmf_tcp 00:07:43.929 ************************************ 00:07:43.929 11:18:27 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:44.189 * Looking for test storage... 00:07:44.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.190 11:18:27 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.190 11:18:27 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.190 11:18:27 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.190 11:18:27 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.190 11:18:27 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.190 11:18:27 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.190 11:18:27 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:44.190 11:18:27 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:44.190 11:18:27 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:44.190 11:18:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:44.190 11:18:27 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:44.190 11:18:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:44.190 11:18:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.190 11:18:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:44.190 ************************************ 00:07:44.190 START TEST nvmf_example 00:07:44.190 ************************************ 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:44.190 * Looking for test storage... 
00:07:44.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:44.190 11:18:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.762 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.762 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:50.762 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:50.762 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:50.762 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:50.762 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:50.762 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:50.762 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:50.762 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:50.762 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:50.763 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:50.763 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:50.763 Found net devices under 
0000:86:00.0: cvl_0_0 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:50.763 Found net devices under 0000:86:00.1: cvl_0_1 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:50.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:07:50.763 00:07:50.763 --- 10.0.0.2 ping statistics --- 00:07:50.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.763 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:07:50.763 00:07:50.763 --- 10.0.0.1 ping statistics --- 00:07:50.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.763 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=445575 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 445575 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 445575 ']' 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
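At this point nvmftestinit has split the two E810 ports so the example exercises a real TCP path: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and both directions are ping-tested before the target starts. A condensed sketch of that setup, using only commands and addresses that appear in the trace:

  ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port locally
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse direction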
00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.763 11:18:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.763 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:51.022 11:18:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:51.022 EAL: No free 2048 kB hugepages reported on node 1 
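With the example target up inside the namespace, the rest of the setup is plain JSON-RPC followed by a load-generator run from the initiator side; the Device Information table that follows reports the results of this run. The equivalent manual sequence, taken directly from the trace (rpc.py abbreviates the full scripts/rpc.py path, and spdk_nvme_perf lives under build/bin in this workspace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192       # create the TCP transport with the options used above
  rpc.py bdev_malloc_create 64 512                      # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Drive it from the initiator: queue depth 64, 4 KiB random mixed I/O, 10 second run
  spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'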
00:08:03.283 Initializing NVMe Controllers 00:08:03.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:03.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:03.283 Initialization complete. Launching workers. 00:08:03.283 ======================================================== 00:08:03.283 Latency(us) 00:08:03.283 Device Information : IOPS MiB/s Average min max 00:08:03.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18351.83 71.69 3487.13 691.37 15473.77 00:08:03.283 ======================================================== 00:08:03.283 Total : 18351.83 71.69 3487.13 691.37 15473.77 00:08:03.283 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:03.283 rmmod nvme_tcp 00:08:03.283 rmmod nvme_fabrics 00:08:03.283 rmmod nvme_keyring 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 445575 ']' 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 445575 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 445575 ']' 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 445575 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 445575 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 445575' 00:08:03.283 killing process with pid 445575 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 445575 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 445575 00:08:03.283 nvmf threads initialize successfully 00:08:03.283 bdev subsystem init successfully 00:08:03.283 created a nvmf target service 00:08:03.283 create targets's poll groups done 00:08:03.283 all subsystems of target started 00:08:03.283 nvmf target is running 00:08:03.283 all subsystems of target stopped 00:08:03.283 destroy targets's poll groups done 00:08:03.283 destroyed the nvmf target service 00:08:03.283 bdev subsystem finish successfully 00:08:03.283 nvmf threads destroy successfully 00:08:03.283 11:18:44 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.283 11:18:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.542 11:18:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:03.542 11:18:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:03.542 11:18:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.542 11:18:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.542 00:08:03.542 real 0m19.453s 00:08:03.542 user 0m45.721s 00:08:03.542 sys 0m5.801s 00:08:03.542 11:18:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.542 11:18:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.542 ************************************ 00:08:03.542 END TEST nvmf_example 00:08:03.542 ************************************ 00:08:03.542 11:18:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:03.542 11:18:47 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:03.542 11:18:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:03.542 11:18:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.542 11:18:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:03.804 ************************************ 00:08:03.804 START TEST nvmf_filesystem 00:08:03.804 ************************************ 00:08:03.804 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:03.804 * Looking for test storage... 
00:08:03.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.804 11:18:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:03.804 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:03.804 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:03.804 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:03.804 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:03.804 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:03.804 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:03.804 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:03.804 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:03.805 11:18:47 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:03.805 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:03.805 #define SPDK_CONFIG_H 00:08:03.805 #define SPDK_CONFIG_APPS 1 00:08:03.805 #define SPDK_CONFIG_ARCH native 00:08:03.805 #undef SPDK_CONFIG_ASAN 00:08:03.805 #undef SPDK_CONFIG_AVAHI 00:08:03.805 #undef SPDK_CONFIG_CET 00:08:03.805 #define SPDK_CONFIG_COVERAGE 1 00:08:03.805 #define SPDK_CONFIG_CROSS_PREFIX 00:08:03.805 #undef SPDK_CONFIG_CRYPTO 00:08:03.805 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:03.805 #undef SPDK_CONFIG_CUSTOMOCF 00:08:03.805 #undef SPDK_CONFIG_DAOS 00:08:03.805 #define SPDK_CONFIG_DAOS_DIR 00:08:03.805 #define SPDK_CONFIG_DEBUG 1 00:08:03.805 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:03.805 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:03.805 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:03.805 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:03.805 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:03.805 #undef SPDK_CONFIG_DPDK_UADK 00:08:03.805 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:03.805 #define SPDK_CONFIG_EXAMPLES 1 00:08:03.805 #undef SPDK_CONFIG_FC 00:08:03.805 #define SPDK_CONFIG_FC_PATH 00:08:03.805 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:03.805 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:03.805 #undef SPDK_CONFIG_FUSE 00:08:03.805 #undef SPDK_CONFIG_FUZZER 00:08:03.805 #define SPDK_CONFIG_FUZZER_LIB 00:08:03.805 #undef SPDK_CONFIG_GOLANG 00:08:03.805 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:03.805 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:03.805 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:03.805 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:03.805 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:03.805 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:03.806 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:03.806 #define SPDK_CONFIG_IDXD 1 00:08:03.806 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:03.806 #undef SPDK_CONFIG_IPSEC_MB 00:08:03.806 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:03.806 #define SPDK_CONFIG_ISAL 1 00:08:03.806 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:03.806 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:03.806 #define SPDK_CONFIG_LIBDIR 00:08:03.806 #undef SPDK_CONFIG_LTO 00:08:03.806 #define SPDK_CONFIG_MAX_LCORES 128 00:08:03.806 #define SPDK_CONFIG_NVME_CUSE 1 00:08:03.806 #undef SPDK_CONFIG_OCF 00:08:03.806 #define SPDK_CONFIG_OCF_PATH 00:08:03.806 #define 
SPDK_CONFIG_OPENSSL_PATH 00:08:03.806 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:03.806 #define SPDK_CONFIG_PGO_DIR 00:08:03.806 #undef SPDK_CONFIG_PGO_USE 00:08:03.806 #define SPDK_CONFIG_PREFIX /usr/local 00:08:03.806 #undef SPDK_CONFIG_RAID5F 00:08:03.806 #undef SPDK_CONFIG_RBD 00:08:03.806 #define SPDK_CONFIG_RDMA 1 00:08:03.806 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:03.806 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:03.806 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:03.806 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:03.806 #define SPDK_CONFIG_SHARED 1 00:08:03.806 #undef SPDK_CONFIG_SMA 00:08:03.806 #define SPDK_CONFIG_TESTS 1 00:08:03.806 #undef SPDK_CONFIG_TSAN 00:08:03.806 #define SPDK_CONFIG_UBLK 1 00:08:03.806 #define SPDK_CONFIG_UBSAN 1 00:08:03.806 #undef SPDK_CONFIG_UNIT_TESTS 00:08:03.806 #undef SPDK_CONFIG_URING 00:08:03.806 #define SPDK_CONFIG_URING_PATH 00:08:03.806 #undef SPDK_CONFIG_URING_ZNS 00:08:03.806 #undef SPDK_CONFIG_USDT 00:08:03.806 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:03.806 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:03.806 #define SPDK_CONFIG_VFIO_USER 1 00:08:03.806 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:03.806 #define SPDK_CONFIG_VHOST 1 00:08:03.806 #define SPDK_CONFIG_VIRTIO 1 00:08:03.806 #undef SPDK_CONFIG_VTUNE 00:08:03.806 #define SPDK_CONFIG_VTUNE_DIR 00:08:03.806 #define SPDK_CONFIG_WERROR 1 00:08:03.806 #define SPDK_CONFIG_WPDK_DIR 00:08:03.806 #undef SPDK_CONFIG_XNVME 00:08:03.806 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:03.806 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:03.807 11:18:47 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:03.807 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
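[editorial sketch, not part of the captured trace] The records above show the harness preparing a LeakSanitizer suppression file before any SPDK application starts, so a known libfuse3 leak does not fail the run. A simplified standalone recap of that pattern, with the file path and suppressed library taken from the trace and everything else illustrative (the real autotest_common.sh appends several suppressions, not just this one):

    # build an LSAN suppression list and point the sanitizer at it
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file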
00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 447990 ]] 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 447990 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.lLgDAW 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.lLgDAW/tests/target /tmp/spdk.lLgDAW 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=189536763904 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6437535744 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983774720 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986568192 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:08:03.808 11:18:47 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=581632 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:03.808 * Looking for test storage... 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=189536763904 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8652128256 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:03.808 11:18:47 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.808 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.068 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
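[editorial sketch, not part of the captured trace] The set_test_storage block above decides where scratch space for the test goes: it parses df -T, takes the mount backing the test directory, and only falls back when the projected usage would push that filesystem past 95%. With the numbers from this run the check works out as below; the values are copied from the trace, the variable names are simplified stand-ins for the trace's arrays:

    requested_size=2214592512        # ~2 GiB of scratch requested
    target_space=189536763904        # available on / (overlay), from df
    size_of_mount=195974299648       # total size of /
    used_on_mount=6437535744         # currently used on /

    new_size=$((used_on_mount + requested_size))      # 8652128256, as in the trace
    echo $((new_size * 100 / size_of_mount))          # ~4%, well under the 95% cutoff
    # so the storage candidate .../spdk/test/nvmf/target on / is accepted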
00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:04.069 11:18:47 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:04.069 11:18:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:10.636 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:10.636 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.636 11:18:52 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:10.636 Found net devices under 0000:86:00.0: cvl_0_0 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:10.636 Found net devices under 0000:86:00.1: cvl_0_1 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.636 11:18:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.636 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.636 11:18:53 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.636 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.636 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.636 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.636 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.636 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:08:10.636 00:08:10.636 --- 10.0.0.2 ping statistics --- 00:08:10.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.636 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:08:10.636 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:08:10.637 00:08:10.637 --- 10.0.0.1 ping statistics --- 00:08:10.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.637 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.637 ************************************ 00:08:10.637 START TEST nvmf_filesystem_no_in_capsule 00:08:10.637 ************************************ 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=451015 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 451015 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 451015 ']' 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.637 11:18:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.637 [2024-07-15 11:18:53.359321] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:10.637 [2024-07-15 11:18:53.359369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.637 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.637 [2024-07-15 11:18:53.431441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.637 [2024-07-15 11:18:53.514046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.637 [2024-07-15 11:18:53.514082] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.637 [2024-07-15 11:18:53.514089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.637 [2024-07-15 11:18:53.514095] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.637 [2024-07-15 11:18:53.514100] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
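For orientation, the nvmf_tcp_init setup traced above reduces to the commands below. This is a condensed sketch of what the xtrace lines show, not a separate script; the namespace name, interface names (cvl_0_0 / cvl_0_1) and the 10.0.0.x addresses are whatever this particular run detected and assigned, not fixed values.

  # One E810 port becomes the target side inside a private namespace,
  # the other stays in the root namespace and acts as the initiator side.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator
  # nvmfappstart then launches the target application inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF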
00:08:10.637 [2024-07-15 11:18:53.514144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.637 [2024-07-15 11:18:53.514263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.637 [2024-07-15 11:18:53.514316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.637 [2024-07-15 11:18:53.514317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.637 [2024-07-15 11:18:54.217220] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.637 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.896 Malloc1 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.896 [2024-07-15 11:18:54.363306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:10.896 { 00:08:10.896 "name": "Malloc1", 00:08:10.896 "aliases": [ 00:08:10.896 "345fa8cf-4a3d-4a76-8187-e40c9d599032" 00:08:10.896 ], 00:08:10.896 "product_name": "Malloc disk", 00:08:10.896 "block_size": 512, 00:08:10.896 "num_blocks": 1048576, 00:08:10.896 "uuid": "345fa8cf-4a3d-4a76-8187-e40c9d599032", 00:08:10.896 "assigned_rate_limits": { 00:08:10.896 "rw_ios_per_sec": 0, 00:08:10.896 "rw_mbytes_per_sec": 0, 00:08:10.896 "r_mbytes_per_sec": 0, 00:08:10.896 "w_mbytes_per_sec": 0 00:08:10.896 }, 00:08:10.896 "claimed": true, 00:08:10.896 "claim_type": "exclusive_write", 00:08:10.896 "zoned": false, 00:08:10.896 "supported_io_types": { 00:08:10.896 "read": true, 00:08:10.896 "write": true, 00:08:10.896 "unmap": true, 00:08:10.896 "flush": true, 00:08:10.896 "reset": true, 00:08:10.896 "nvme_admin": false, 00:08:10.896 "nvme_io": false, 00:08:10.896 "nvme_io_md": false, 00:08:10.896 "write_zeroes": true, 00:08:10.896 "zcopy": true, 00:08:10.896 "get_zone_info": false, 00:08:10.896 "zone_management": false, 00:08:10.896 "zone_append": false, 00:08:10.896 "compare": false, 00:08:10.896 "compare_and_write": false, 00:08:10.896 "abort": true, 00:08:10.896 "seek_hole": false, 00:08:10.896 "seek_data": false, 00:08:10.896 "copy": true, 00:08:10.896 "nvme_iov_md": false 00:08:10.896 }, 00:08:10.896 "memory_domains": [ 00:08:10.896 { 
00:08:10.896 "dma_device_id": "system", 00:08:10.896 "dma_device_type": 1 00:08:10.896 }, 00:08:10.896 { 00:08:10.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.896 "dma_device_type": 2 00:08:10.896 } 00:08:10.896 ], 00:08:10.896 "driver_specific": {} 00:08:10.896 } 00:08:10.896 ]' 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:10.896 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:10.897 11:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:12.272 11:18:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:12.272 11:18:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:12.272 11:18:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:12.272 11:18:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:12.272 11:18:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:14.173 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:14.432 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:14.432 11:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:15.370 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:15.370 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:15.370 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:15.370 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.370 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.629 ************************************ 00:08:15.629 START TEST filesystem_ext4 00:08:15.629 ************************************ 00:08:15.629 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:15.629 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:15.629 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.629 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:15.630 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:15.630 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:15.630 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:15.630 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:15.630 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:15.630 11:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:15.630 11:18:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:15.630 mke2fs 1.46.5 (30-Dec-2021) 00:08:15.630 Discarding device blocks: 0/522240 done 00:08:15.630 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:15.630 Filesystem UUID: 7c9224ea-db06-492c-b896-74e4cc830215 00:08:15.630 Superblock backups stored on blocks: 00:08:15.630 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:15.630 00:08:15.630 Allocating group tables: 0/64 done 00:08:15.630 Writing inode tables: 0/64 done 00:08:15.888 Creating journal (8192 blocks): done 00:08:16.714 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:08:16.714 00:08:16.714 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:16.714 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:16.973 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 451015 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.233 00:08:17.233 real 0m1.681s 00:08:17.233 user 0m0.028s 00:08:17.233 sys 0m0.060s 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:17.233 ************************************ 00:08:17.233 END TEST filesystem_ext4 00:08:17.233 ************************************ 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:17.233 11:19:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.233 ************************************ 00:08:17.233 START TEST filesystem_btrfs 00:08:17.233 ************************************ 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:17.233 11:19:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:17.492 btrfs-progs v6.6.2 00:08:17.492 See https://btrfs.readthedocs.io for more information. 00:08:17.492 00:08:17.492 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:17.492 NOTE: several default settings have changed in version 5.15, please make sure 00:08:17.492 this does not affect your deployments: 00:08:17.492 - DUP for metadata (-m dup) 00:08:17.492 - enabled no-holes (-O no-holes) 00:08:17.492 - enabled free-space-tree (-R free-space-tree) 00:08:17.492 00:08:17.492 Label: (null) 00:08:17.492 UUID: 93fabad0-105b-43ae-a8af-db95bd5e5057 00:08:17.492 Node size: 16384 00:08:17.492 Sector size: 4096 00:08:17.492 Filesystem size: 510.00MiB 00:08:17.492 Block group profiles: 00:08:17.492 Data: single 8.00MiB 00:08:17.492 Metadata: DUP 32.00MiB 00:08:17.492 System: DUP 8.00MiB 00:08:17.492 SSD detected: yes 00:08:17.492 Zoned device: no 00:08:17.492 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:17.492 Runtime features: free-space-tree 00:08:17.492 Checksum: crc32c 00:08:17.492 Number of devices: 1 00:08:17.492 Devices: 00:08:17.492 ID SIZE PATH 00:08:17.492 1 510.00MiB /dev/nvme0n1p1 00:08:17.492 00:08:17.492 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:17.492 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:18.060 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.060 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:18.060 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.060 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:18.060 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:18.060 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:18.060 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 451015 00:08:18.060 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:18.060 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:18.060 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:18.060 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:18.060 00:08:18.060 real 0m0.679s 00:08:18.060 user 0m0.031s 00:08:18.060 sys 0m0.117s 00:08:18.060 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:18.061 ************************************ 00:08:18.061 END TEST filesystem_btrfs 00:08:18.061 ************************************ 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.061 ************************************ 00:08:18.061 START TEST filesystem_xfs 00:08:18.061 ************************************ 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:18.061 11:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:18.061 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:18.061 = sectsz=512 attr=2, projid32bit=1 00:08:18.061 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:18.061 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:18.061 data = bsize=4096 blocks=130560, imaxpct=25 00:08:18.061 = sunit=0 swidth=0 blks 00:08:18.061 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:18.061 log =internal log bsize=4096 blocks=16384, version=2 00:08:18.061 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:18.061 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:18.997 Discarding blocks...Done. 
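Each filesystem_* sub-test (ext4, btrfs, and the xfs case starting here) drives the exported namespace through the same cycle. Condensed from the trace, with the make_filesystem helper's retry and error handling omitted:

  # make_filesystem picks the force flag per filesystem type
  # (mke2fs wants -F, btrfs-progs and xfsprogs want -f), then formats the partition.
  make_filesystem "$fstype" "/dev/${nvme_name}p1"

  mount "/dev/${nvme_name}p1" /mnt/device          # mount the freshly created filesystem
  touch /mnt/device/aaa                            # simple write over NVMe/TCP ...
  sync
  rm /mnt/device/aaa                               # ... and delete again
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                               # the target process must still be alive
  lsblk -l -o NAME | grep -q -w "${nvme_name}"     # controller still visible on the host
  lsblk -l -o NAME | grep -q -w "${nvme_name}p1"   # and so is the test partition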
00:08:18.997 11:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:18.997 11:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 451015 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.542 00:08:21.542 real 0m3.379s 00:08:21.542 user 0m0.023s 00:08:21.542 sys 0m0.070s 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:21.542 ************************************ 00:08:21.542 END TEST filesystem_xfs 00:08:21.542 ************************************ 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:21.542 11:19:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:21.800 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:21.800 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.800 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.800 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:21.800 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:21.800 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.800 11:19:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:21.800 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.800 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:21.800 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 451015 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 451015 ']' 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 451015 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 451015 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 451015' 00:08:21.801 killing process with pid 451015 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 451015 00:08:21.801 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 451015 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:22.367 00:08:22.367 real 0m12.405s 00:08:22.367 user 0m48.691s 00:08:22.367 sys 0m1.182s 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.367 ************************************ 00:08:22.367 END TEST nvmf_filesystem_no_in_capsule 00:08:22.367 ************************************ 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.367 ************************************ 00:08:22.367 START TEST nvmf_filesystem_in_capsule 00:08:22.367 ************************************ 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=453311 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 453311 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 453311 ']' 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.367 11:19:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.367 [2024-07-15 11:19:05.837563] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:22.367 [2024-07-15 11:19:05.837605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.367 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.367 [2024-07-15 11:19:05.912920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.626 [2024-07-15 11:19:05.987197] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.626 [2024-07-15 11:19:05.987242] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
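The teardown between the two capsule variants, traced just above before this second nvmf_tgt instance comes up, condenses to the following (waitforserial_disconnect polling is left out):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the SPDK_TEST partition again
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # detach the initiator
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"                                       # killprocess: stop the target ...
  wait "$nvmfpid"                                       # ... and reap it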
00:08:22.626 [2024-07-15 11:19:05.987249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.626 [2024-07-15 11:19:05.987255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.626 [2024-07-15 11:19:05.987261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.626 [2024-07-15 11:19:05.987321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.626 [2024-07-15 11:19:05.987452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.626 [2024-07-15 11:19:05.987557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.626 [2024-07-15 11:19:05.987558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.194 [2024-07-15 11:19:06.691289] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.194 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.454 Malloc1 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.454 11:19:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.454 [2024-07-15 11:19:06.840404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:23.454 { 00:08:23.454 "name": "Malloc1", 00:08:23.454 "aliases": [ 00:08:23.454 "5c7ee383-f967-4775-9386-37dd57f38183" 00:08:23.454 ], 00:08:23.454 "product_name": "Malloc disk", 00:08:23.454 "block_size": 512, 00:08:23.454 "num_blocks": 1048576, 00:08:23.454 "uuid": "5c7ee383-f967-4775-9386-37dd57f38183", 00:08:23.454 "assigned_rate_limits": { 00:08:23.454 "rw_ios_per_sec": 0, 00:08:23.454 "rw_mbytes_per_sec": 0, 00:08:23.454 "r_mbytes_per_sec": 0, 00:08:23.454 "w_mbytes_per_sec": 0 00:08:23.454 }, 00:08:23.454 "claimed": true, 00:08:23.454 "claim_type": "exclusive_write", 00:08:23.454 "zoned": false, 00:08:23.454 "supported_io_types": { 00:08:23.454 "read": true, 00:08:23.454 "write": true, 00:08:23.454 "unmap": true, 00:08:23.454 "flush": true, 00:08:23.454 "reset": true, 00:08:23.454 "nvme_admin": false, 00:08:23.454 "nvme_io": false, 00:08:23.454 "nvme_io_md": false, 00:08:23.454 "write_zeroes": true, 00:08:23.454 "zcopy": true, 00:08:23.454 "get_zone_info": false, 00:08:23.454 "zone_management": false, 00:08:23.454 
"zone_append": false, 00:08:23.454 "compare": false, 00:08:23.454 "compare_and_write": false, 00:08:23.454 "abort": true, 00:08:23.454 "seek_hole": false, 00:08:23.454 "seek_data": false, 00:08:23.454 "copy": true, 00:08:23.454 "nvme_iov_md": false 00:08:23.454 }, 00:08:23.454 "memory_domains": [ 00:08:23.454 { 00:08:23.454 "dma_device_id": "system", 00:08:23.454 "dma_device_type": 1 00:08:23.454 }, 00:08:23.454 { 00:08:23.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.454 "dma_device_type": 2 00:08:23.454 } 00:08:23.454 ], 00:08:23.454 "driver_specific": {} 00:08:23.454 } 00:08:23.454 ]' 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:23.454 11:19:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:24.831 11:19:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:24.831 11:19:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:24.831 11:19:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:24.831 11:19:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:24.831 11:19:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:26.806 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:27.065 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:27.323 11:19:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.698 ************************************ 00:08:28.698 START TEST filesystem_in_capsule_ext4 00:08:28.698 ************************************ 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:28.698 11:19:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:28.698 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:28.699 11:19:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:28.699 mke2fs 1.46.5 (30-Dec-2021) 00:08:28.699 Discarding device blocks: 0/522240 done 00:08:28.699 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:28.699 Filesystem UUID: deab91f2-abbb-4938-be09-3cdd9706aa5a 00:08:28.699 Superblock backups stored on blocks: 00:08:28.699 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:28.699 00:08:28.699 Allocating group tables: 0/64 done 00:08:28.699 Writing inode tables: 0/64 done 00:08:30.074 Creating journal (8192 blocks): done 00:08:30.897 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:08:30.897 00:08:30.897 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:30.897 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:30.897 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:30.897 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:30.897 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:30.897 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:30.897 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:30.897 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.156 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 453311 00:08:31.156 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.156 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.156 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.156 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.156 00:08:31.156 real 0m2.611s 00:08:31.157 user 0m0.028s 00:08:31.157 sys 0m0.061s 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:31.157 ************************************ 00:08:31.157 END TEST filesystem_in_capsule_ext4 00:08:31.157 ************************************ 
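The ext4 pass above is the template every filesystem case in this suite follows: pick the force flag for mkfs, build the filesystem on the namespace's first partition, mount it, create and delete a file with a sync after each step, unmount, and confirm via lsblk that the partition is still exposed while the target keeps running. A condensed sketch of that sequence, using the same device and mount point shown in the log (the partitioning step comes from the earlier parted/partprobe calls; flags and order are as traced, only the xtrace noise is stripped):

    # exercise ext4 on the first partition of the exported namespace
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # label and partition the namespace
    partprobe && sleep 1                                          # let the kernel pick up the new partition
    mkfs.ext4 -F /dev/nvme0n1p1                                   # ext4 is the one fstype that takes -F here
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync                                 # write through to the TCP target
    rm /mnt/device/aaa && sync
    umount /mnt/device
    lsblk -l -o NAME | grep -q -w nvme0n1p1                       # partition must still be visible afterwards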
00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.157 ************************************ 00:08:31.157 START TEST filesystem_in_capsule_btrfs 00:08:31.157 ************************************ 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:31.157 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:31.416 btrfs-progs v6.6.2 00:08:31.416 See https://btrfs.readthedocs.io for more information. 00:08:31.416 00:08:31.416 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:31.416 NOTE: several default settings have changed in version 5.15, please make sure 00:08:31.416 this does not affect your deployments: 00:08:31.416 - DUP for metadata (-m dup) 00:08:31.416 - enabled no-holes (-O no-holes) 00:08:31.416 - enabled free-space-tree (-R free-space-tree) 00:08:31.416 00:08:31.416 Label: (null) 00:08:31.416 UUID: 1ea4cc3b-e05a-4c23-9769-3f54d4592cc8 00:08:31.416 Node size: 16384 00:08:31.416 Sector size: 4096 00:08:31.416 Filesystem size: 510.00MiB 00:08:31.416 Block group profiles: 00:08:31.416 Data: single 8.00MiB 00:08:31.416 Metadata: DUP 32.00MiB 00:08:31.416 System: DUP 8.00MiB 00:08:31.416 SSD detected: yes 00:08:31.416 Zoned device: no 00:08:31.416 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:31.416 Runtime features: free-space-tree 00:08:31.416 Checksum: crc32c 00:08:31.416 Number of devices: 1 00:08:31.416 Devices: 00:08:31.416 ID SIZE PATH 00:08:31.416 1 510.00MiB /dev/nvme0n1p1 00:08:31.416 00:08:31.416 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:31.416 11:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.350 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.350 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:32.350 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.350 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:32.350 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:32.350 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.350 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 453311 00:08:32.351 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.351 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.351 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.351 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.351 00:08:32.351 real 0m1.288s 00:08:32.351 user 0m0.021s 00:08:32.351 sys 0m0.127s 00:08:32.351 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.351 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:32.351 ************************************ 00:08:32.351 END TEST filesystem_in_capsule_btrfs 00:08:32.351 ************************************ 00:08:32.351 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:32.351 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:32.351 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:32.351 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.351 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.609 ************************************ 00:08:32.609 START TEST filesystem_in_capsule_xfs 00:08:32.609 ************************************ 00:08:32.609 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:32.609 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:32.609 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.609 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:32.609 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:32.609 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:32.609 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:32.609 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:32.609 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:32.609 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:32.609 11:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:32.609 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:32.609 = sectsz=512 attr=2, projid32bit=1 00:08:32.609 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:32.609 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:32.609 data = bsize=4096 blocks=130560, imaxpct=25 00:08:32.609 = sunit=0 swidth=0 blks 00:08:32.609 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:32.609 log =internal log bsize=4096 blocks=16384, version=2 00:08:32.609 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:32.609 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:33.544 Discarding blocks...Done. 
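The xtrace above also shows how the make_filesystem helper picks its force flag: only ext4 gets -F ('[' ext4 = ext4 ']'), while btrfs and xfs fall through to -f, after which mkfs.$fstype is run against /dev/nvme0n1p1. A simplified sketch of that dispatch (the helper in autotest_common.sh also sets a retry counter, local i=0, whose loop sits outside this excerpt, so it is omitted here):

    # simplified make_filesystem, mirroring the flag selection seen in the trace
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F              # mkfs.ext4 forces with -F
        else
            force=-f              # mkfs.btrfs and mkfs.xfs force with -f
        fi
        mkfs.$fstype $force "$dev_name"
    }

    make_filesystem xfs /dev/nvme0n1p1   # equivalent to the call traced above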
00:08:33.544 11:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:33.544 11:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 453311 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:36.077 00:08:36.077 real 0m3.456s 00:08:36.077 user 0m0.018s 00:08:36.077 sys 0m0.076s 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:36.077 ************************************ 00:08:36.077 END TEST filesystem_in_capsule_xfs 00:08:36.077 ************************************ 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:36.077 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:36.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:36.336 11:19:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 453311 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 453311 ']' 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 453311 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:36.336 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 453311 00:08:36.594 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:36.594 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:36.594 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 453311' 00:08:36.594 killing process with pid 453311 00:08:36.594 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 453311 00:08:36.594 11:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 453311 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:36.854 00:08:36.854 real 0m14.513s 00:08:36.854 user 0m57.050s 00:08:36.854 sys 0m1.245s 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:36.854 ************************************ 00:08:36.854 END TEST nvmf_filesystem_in_capsule 00:08:36.854 ************************************ 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:36.854 rmmod nvme_tcp 00:08:36.854 rmmod nvme_fabrics 00:08:36.854 rmmod nvme_keyring 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.854 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.390 11:19:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:39.390 00:08:39.390 real 0m35.323s 00:08:39.390 user 1m47.506s 00:08:39.390 sys 0m7.061s 00:08:39.390 11:19:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.390 11:19:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:39.390 ************************************ 00:08:39.390 END TEST nvmf_filesystem 00:08:39.390 ************************************ 00:08:39.390 11:19:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:39.390 11:19:22 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:39.390 11:19:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:39.390 11:19:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.391 11:19:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:39.391 ************************************ 00:08:39.391 START TEST nvmf_target_discovery 00:08:39.391 ************************************ 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:39.391 * Looking for test storage... 
00:08:39.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:39.391 11:19:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.667 11:19:28 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:44.667 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:44.667 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:44.667 Found net devices under 0000:86:00.0: cvl_0_0 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:44.667 Found net devices under 0000:86:00.1: cvl_0_1 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.667 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.927 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.927 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.927 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:44.927 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.927 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.927 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.927 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:44.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:08:44.927 00:08:44.927 --- 10.0.0.2 ping statistics --- 00:08:44.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.927 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:44.927 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:08:44.927 00:08:44.927 --- 10.0.0.1 ping statistics --- 00:08:44.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.927 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:08:44.927 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.927 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:44.927 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:44.928 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.928 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:44.928 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:44.928 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.928 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:44.928 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:44.928 11:19:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:44.928 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:44.928 11:19:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:44.928 11:19:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.185 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=459363 00:08:45.185 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 459363 00:08:45.185 11:19:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.185 11:19:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 459363 ']' 00:08:45.185 11:19:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.185 11:19:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.185 11:19:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:45.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.185 11:19:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.185 11:19:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.185 [2024-07-15 11:19:28.568028] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:45.185 [2024-07-15 11:19:28.568084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.185 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.186 [2024-07-15 11:19:28.639777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.186 [2024-07-15 11:19:28.721877] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.186 [2024-07-15 11:19:28.721914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.186 [2024-07-15 11:19:28.721921] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.186 [2024-07-15 11:19:28.721927] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.186 [2024-07-15 11:19:28.721932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.186 [2024-07-15 11:19:28.721978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.186 [2024-07-15 11:19:28.722007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.186 [2024-07-15 11:19:28.722111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.186 [2024-07-15 11:19:28.722112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.119 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.119 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:46.119 11:19:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.119 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.119 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.119 11:19:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.119 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 [2024-07-15 11:19:29.426153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 Null1 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 [2024-07-15 11:19:29.471697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 Null2 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:46.120 11:19:29 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 Null3 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 Null4 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.120 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:08:46.379 00:08:46.379 Discovery Log Number of Records 6, Generation counter 6 00:08:46.379 =====Discovery Log Entry 0====== 00:08:46.379 trtype: tcp 00:08:46.379 adrfam: ipv4 00:08:46.379 subtype: current discovery subsystem 00:08:46.379 treq: not required 00:08:46.379 portid: 0 00:08:46.379 trsvcid: 4420 00:08:46.379 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:46.379 traddr: 10.0.0.2 00:08:46.379 eflags: explicit discovery connections, duplicate discovery information 00:08:46.379 sectype: none 00:08:46.379 =====Discovery Log Entry 1====== 00:08:46.379 trtype: tcp 00:08:46.379 adrfam: ipv4 00:08:46.379 subtype: nvme subsystem 00:08:46.379 treq: not required 00:08:46.379 portid: 0 00:08:46.379 trsvcid: 4420 00:08:46.379 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:46.379 traddr: 10.0.0.2 00:08:46.379 eflags: none 00:08:46.379 sectype: none 00:08:46.379 =====Discovery Log Entry 2====== 00:08:46.379 trtype: tcp 00:08:46.379 adrfam: ipv4 00:08:46.379 subtype: nvme subsystem 00:08:46.379 treq: not required 00:08:46.379 portid: 0 00:08:46.379 trsvcid: 4420 00:08:46.379 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:46.379 traddr: 10.0.0.2 00:08:46.379 eflags: none 00:08:46.379 sectype: none 00:08:46.379 =====Discovery Log Entry 3====== 00:08:46.379 trtype: tcp 00:08:46.379 adrfam: ipv4 00:08:46.379 subtype: nvme subsystem 00:08:46.379 treq: not required 00:08:46.379 portid: 0 00:08:46.379 trsvcid: 4420 00:08:46.379 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:46.379 traddr: 10.0.0.2 00:08:46.379 eflags: none 00:08:46.379 sectype: none 00:08:46.379 =====Discovery Log Entry 4====== 00:08:46.379 trtype: tcp 00:08:46.379 adrfam: ipv4 00:08:46.379 subtype: nvme subsystem 00:08:46.379 treq: not required 
00:08:46.379 portid: 0 00:08:46.379 trsvcid: 4420 00:08:46.379 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:46.379 traddr: 10.0.0.2 00:08:46.379 eflags: none 00:08:46.379 sectype: none 00:08:46.379 =====Discovery Log Entry 5====== 00:08:46.379 trtype: tcp 00:08:46.379 adrfam: ipv4 00:08:46.379 subtype: discovery subsystem referral 00:08:46.379 treq: not required 00:08:46.379 portid: 0 00:08:46.379 trsvcid: 4430 00:08:46.379 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:46.379 traddr: 10.0.0.2 00:08:46.379 eflags: none 00:08:46.379 sectype: none 00:08:46.379 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:46.379 Perform nvmf subsystem discovery via RPC 00:08:46.379 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:46.379 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.379 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.379 [ 00:08:46.379 { 00:08:46.379 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:46.379 "subtype": "Discovery", 00:08:46.379 "listen_addresses": [ 00:08:46.379 { 00:08:46.379 "trtype": "TCP", 00:08:46.379 "adrfam": "IPv4", 00:08:46.379 "traddr": "10.0.0.2", 00:08:46.379 "trsvcid": "4420" 00:08:46.379 } 00:08:46.379 ], 00:08:46.379 "allow_any_host": true, 00:08:46.379 "hosts": [] 00:08:46.379 }, 00:08:46.379 { 00:08:46.379 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.379 "subtype": "NVMe", 00:08:46.380 "listen_addresses": [ 00:08:46.380 { 00:08:46.380 "trtype": "TCP", 00:08:46.380 "adrfam": "IPv4", 00:08:46.380 "traddr": "10.0.0.2", 00:08:46.380 "trsvcid": "4420" 00:08:46.380 } 00:08:46.380 ], 00:08:46.380 "allow_any_host": true, 00:08:46.380 "hosts": [], 00:08:46.380 "serial_number": "SPDK00000000000001", 00:08:46.380 "model_number": "SPDK bdev Controller", 00:08:46.380 "max_namespaces": 32, 00:08:46.380 "min_cntlid": 1, 00:08:46.380 "max_cntlid": 65519, 00:08:46.380 "namespaces": [ 00:08:46.380 { 00:08:46.380 "nsid": 1, 00:08:46.380 "bdev_name": "Null1", 00:08:46.380 "name": "Null1", 00:08:46.380 "nguid": "8DB6A12E162344F6887379E903903BD8", 00:08:46.380 "uuid": "8db6a12e-1623-44f6-8873-79e903903bd8" 00:08:46.380 } 00:08:46.380 ] 00:08:46.380 }, 00:08:46.380 { 00:08:46.380 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:46.380 "subtype": "NVMe", 00:08:46.380 "listen_addresses": [ 00:08:46.380 { 00:08:46.380 "trtype": "TCP", 00:08:46.380 "adrfam": "IPv4", 00:08:46.380 "traddr": "10.0.0.2", 00:08:46.380 "trsvcid": "4420" 00:08:46.380 } 00:08:46.380 ], 00:08:46.380 "allow_any_host": true, 00:08:46.380 "hosts": [], 00:08:46.380 "serial_number": "SPDK00000000000002", 00:08:46.380 "model_number": "SPDK bdev Controller", 00:08:46.380 "max_namespaces": 32, 00:08:46.380 "min_cntlid": 1, 00:08:46.380 "max_cntlid": 65519, 00:08:46.380 "namespaces": [ 00:08:46.380 { 00:08:46.380 "nsid": 1, 00:08:46.380 "bdev_name": "Null2", 00:08:46.380 "name": "Null2", 00:08:46.380 "nguid": "99CC7555B0674A42B7E264641C109ED4", 00:08:46.380 "uuid": "99cc7555-b067-4a42-b7e2-64641c109ed4" 00:08:46.380 } 00:08:46.380 ] 00:08:46.380 }, 00:08:46.380 { 00:08:46.380 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:46.380 "subtype": "NVMe", 00:08:46.380 "listen_addresses": [ 00:08:46.380 { 00:08:46.380 "trtype": "TCP", 00:08:46.380 "adrfam": "IPv4", 00:08:46.380 "traddr": "10.0.0.2", 00:08:46.380 "trsvcid": "4420" 00:08:46.380 } 00:08:46.380 ], 00:08:46.380 "allow_any_host": true, 
00:08:46.380 "hosts": [], 00:08:46.380 "serial_number": "SPDK00000000000003", 00:08:46.380 "model_number": "SPDK bdev Controller", 00:08:46.380 "max_namespaces": 32, 00:08:46.380 "min_cntlid": 1, 00:08:46.380 "max_cntlid": 65519, 00:08:46.380 "namespaces": [ 00:08:46.380 { 00:08:46.380 "nsid": 1, 00:08:46.380 "bdev_name": "Null3", 00:08:46.380 "name": "Null3", 00:08:46.380 "nguid": "16FD2660CABE43B59FB4634BC5D566CF", 00:08:46.380 "uuid": "16fd2660-cabe-43b5-9fb4-634bc5d566cf" 00:08:46.380 } 00:08:46.380 ] 00:08:46.380 }, 00:08:46.380 { 00:08:46.380 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:46.380 "subtype": "NVMe", 00:08:46.380 "listen_addresses": [ 00:08:46.380 { 00:08:46.380 "trtype": "TCP", 00:08:46.380 "adrfam": "IPv4", 00:08:46.380 "traddr": "10.0.0.2", 00:08:46.380 "trsvcid": "4420" 00:08:46.380 } 00:08:46.380 ], 00:08:46.380 "allow_any_host": true, 00:08:46.380 "hosts": [], 00:08:46.380 "serial_number": "SPDK00000000000004", 00:08:46.380 "model_number": "SPDK bdev Controller", 00:08:46.380 "max_namespaces": 32, 00:08:46.380 "min_cntlid": 1, 00:08:46.380 "max_cntlid": 65519, 00:08:46.380 "namespaces": [ 00:08:46.380 { 00:08:46.380 "nsid": 1, 00:08:46.380 "bdev_name": "Null4", 00:08:46.380 "name": "Null4", 00:08:46.380 "nguid": "D9CE3F56F0874CE484545CADC774BC6D", 00:08:46.380 "uuid": "d9ce3f56-f087-4ce4-8454-5cadc774bc6d" 00:08:46.380 } 00:08:46.380 ] 00:08:46.380 } 00:08:46.380 ] 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:46.380 11:19:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.381 11:19:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.381 rmmod nvme_tcp 00:08:46.381 rmmod nvme_fabrics 00:08:46.381 rmmod nvme_keyring 00:08:46.381 11:19:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.381 11:19:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:46.381 11:19:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:46.381 11:19:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 459363 ']' 00:08:46.381 11:19:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 459363 00:08:46.381 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 459363 ']' 00:08:46.381 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 459363 00:08:46.381 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:46.381 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:46.639 11:19:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 459363 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 459363' 00:08:46.639 killing process with pid 459363 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 459363 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 459363 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.639 11:19:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.175 11:19:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:49.175 00:08:49.175 real 0m9.723s 00:08:49.175 user 0m7.718s 00:08:49.175 sys 0m4.732s 00:08:49.175 11:19:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.175 11:19:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:49.175 ************************************ 00:08:49.175 END TEST nvmf_target_discovery 00:08:49.175 ************************************ 00:08:49.175 11:19:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:08:49.175 11:19:32 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:49.175 11:19:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:49.175 11:19:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.175 11:19:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:49.175 ************************************ 00:08:49.175 START TEST nvmf_referrals 00:08:49.175 ************************************ 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:49.175 * Looking for test storage... 00:08:49.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.175 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
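referrals.sh parameterizes three referral targets on the loopback addresses 127.0.0.2 through 127.0.0.4 (the referral port, 4430, is set just below). The add/verify/remove cycle the test drives later in this run reduces to a handful of RPCs; a hypothetical standalone version, using scripts/rpc.py and jq as the trace does:

RPC=./scripts/rpc.py
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  $RPC nvmf_discovery_add_referral -t tcp -a $ip -s 4430
done
$RPC nvmf_discovery_get_referrals | jq length   # the test expects 3 here
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  $RPC nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
done
$RPC nvmf_discovery_get_referrals | jq length   # and 0 after removal

The nqn-qualified variants used afterwards (-n discovery, -n nqn.2016-06.io.spdk:cnode1) follow the same pattern; in the trace they are what make a referral appear in the discovery log as either a discovery subsystem referral or an nvme subsystem entry.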
00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.176 11:19:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.518 11:19:37 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:54.518 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:54.518 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.518 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.519 11:19:37 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:54.519 Found net devices under 0000:86:00.0: cvl_0_0 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:54.519 Found net devices under 0000:86:00.1: cvl_0_1 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.519 11:19:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.519 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.778 11:19:38 
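nvmf_tcp_init splits the two detected E810 ports across a network namespace: the target-side port (cvl_0_0, 10.0.0.2) is moved into cvl_0_0_ns_spdk, while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace; the lines that follow bring both links up, open TCP port 4420, and ping in both directions. Pulled together, the plumbing from this trace is (interface names are the renamed cvl_* netdevs found above; adjust for other NICs):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator to target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target to initiator

The nvmf_tgt process itself is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as shown a few lines further on), so the listeners it creates bind to the namespaced 10.0.0.2 address.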
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:08:54.778 00:08:54.778 --- 10.0.0.2 ping statistics --- 00:08:54.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.778 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:08:54.778 00:08:54.778 --- 10.0.0.1 ping statistics --- 00:08:54.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.778 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=463146 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 463146 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 463146 ']' 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:54.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.778 11:19:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:54.778 [2024-07-15 11:19:38.322750] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:54.778 [2024-07-15 11:19:38.322797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.778 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.037 [2024-07-15 11:19:38.393568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.037 [2024-07-15 11:19:38.473121] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.037 [2024-07-15 11:19:38.473158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.037 [2024-07-15 11:19:38.473165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.037 [2024-07-15 11:19:38.473171] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.037 [2024-07-15 11:19:38.473176] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.037 [2024-07-15 11:19:38.473240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.037 [2024-07-15 11:19:38.473316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.037 [2024-07-15 11:19:38.473425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.037 [2024-07-15 11:19:38.473426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.604 [2024-07-15 11:19:39.166142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.604 [2024-07-15 11:19:39.179634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.604 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:55.862 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.121 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.122 11:19:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.380 11:19:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:56.638 11:19:40 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:56.638 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:56.638 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:56.638 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:56.638 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.638 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:56.897 11:19:40 
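The nvme-side checks in this test all parse the same discover output; get_referral_ips and get_discovery_entries differ only in the jq filter they apply to the JSON records. A condensed, hypothetical helper built from the exact filters visible in the trace (the discover_json name is made up here; the host NQN and ID come from the variables set by nvmf/common.sh above):

discover_json() {
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -a 10.0.0.2 -s 8009 -o json
}
# referral/subsystem addresses, i.e. everything except the current discovery subsystem
discover_json | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
# records of one subtype, e.g. "nvme subsystem" or "discovery subsystem referral"
discover_json | jq '.records[] | select(.subtype == "nvme subsystem")'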
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.897 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:57.156 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:57.415 
11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.415 rmmod nvme_tcp 00:08:57.415 rmmod nvme_fabrics 00:08:57.415 rmmod nvme_keyring 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 463146 ']' 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 463146 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 463146 ']' 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 463146 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 463146 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 463146' 00:08:57.415 killing process with pid 463146 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 463146 00:08:57.415 11:19:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 463146 00:08:57.673 11:19:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.673 11:19:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.673 11:19:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.673 11:19:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.673 11:19:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.673 11:19:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.673 11:19:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.674 11:19:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.206 11:19:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.206 00:09:00.206 real 0m10.861s 00:09:00.206 user 0m13.040s 00:09:00.206 sys 0m5.043s 00:09:00.206 11:19:43 nvmf_tcp.nvmf_referrals 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.206 11:19:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 ************************************ 00:09:00.206 END TEST nvmf_referrals 00:09:00.206 ************************************ 00:09:00.206 11:19:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:00.206 11:19:43 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:00.206 11:19:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:00.206 11:19:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.206 11:19:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 ************************************ 00:09:00.206 START TEST nvmf_connect_disconnect 00:09:00.206 ************************************ 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:00.206 * Looking for test storage... 00:09:00.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.206 11:19:43 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.206 11:19:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:05.476 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:05.476 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.476 11:19:48 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:05.476 Found net devices under 0000:86:00.0: cvl_0_0 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:05.476 Found net devices under 0000:86:00.1: cvl_0_1 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.476 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.477 11:19:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.477 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.477 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.477 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:05.477 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:09:05.736 00:09:05.736 --- 10.0.0.2 ping statistics --- 00:09:05.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.736 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:05.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:09:05.736 00:09:05.736 --- 10.0.0.1 ping statistics --- 00:09:05.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.736 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=467226 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 467226 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 467226 ']' 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.736 11:19:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:05.736 [2024-07-15 11:19:49.250312] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
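The trace above (nvmf_tcp_init in nvmf/common.sh) builds a small two-port loopback topology before the target application is launched: the E810 port cvl_0_0 is moved into a network namespace for the target, cvl_0_1 stays in the root namespace for the initiator, and a ping in each direction verifies the path. A condensed sketch of that sequence, reusing only the names and addresses visible in the trace (option handling in the real helper may differ):

  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
  ping -c 1 10.0.0.2                                    # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # and back
  modprobe nvme-tcp                                     # kernel initiator transport

The nvmf_tgt application is then started inside cvl_0_0_ns_spdk (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF); its EAL and startup messages follow.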
00:09:05.736 [2024-07-15 11:19:49.250355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.736 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.736 [2024-07-15 11:19:49.318825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.995 [2024-07-15 11:19:49.398742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.995 [2024-07-15 11:19:49.398776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.995 [2024-07-15 11:19:49.398782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.995 [2024-07-15 11:19:49.398788] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.995 [2024-07-15 11:19:49.398792] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.995 [2024-07-15 11:19:49.398900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.995 [2024-07-15 11:19:49.399007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.995 [2024-07-15 11:19:49.399111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.995 [2024-07-15 11:19:49.399113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.562 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.562 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:06.562 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.562 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.562 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:06.562 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.562 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:06.562 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.562 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:06.562 [2024-07-15 11:19:50.105264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:06.563 11:19:50 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:06.563 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.821 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.821 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.821 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:06.821 [2024-07-15 11:19:50.157291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.821 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.821 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:06.821 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:06.821 11:19:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:10.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:23.311 rmmod nvme_tcp 00:09:23.311 rmmod nvme_fabrics 00:09:23.311 rmmod nvme_keyring 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 467226 ']' 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 467226 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 467226 ']' 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 467226 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 467226 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 467226' 00:09:23.311 killing process with pid 467226 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 467226 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 467226 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:23.311 11:20:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.211 11:20:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:25.211 00:09:25.211 real 0m25.516s 00:09:25.211 user 1m10.529s 00:09:25.211 sys 0m5.515s 00:09:25.211 11:20:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.211 11:20:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:25.211 ************************************ 00:09:25.211 END TEST nvmf_connect_disconnect 00:09:25.211 ************************************ 00:09:25.470 11:20:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:25.470 11:20:08 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:25.470 11:20:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:25.470 11:20:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.470 11:20:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:25.470 ************************************ 00:09:25.470 START TEST nvmf_multitarget 00:09:25.470 ************************************ 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:25.470 * Looking for test storage... 
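Before the multitarget output continues, it is worth condensing what the nvmf_connect_disconnect run above actually executed: the target is configured over JSON-RPC with one malloc-backed namespace and a TCP listener, and the kernel initiator then connects and disconnects five times (num_iterations=5). A hedged sketch of the equivalent sequence, using only the arguments visible in the trace (the test itself goes through the rpc_cmd helper rather than calling rpc.py directly):

  # target side, driven through SPDK's JSON-RPC
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                      # 64 MiB bdev, 512 B blocks -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: five connect/disconnect iterations
  for i in $(seq 1 5); do
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints "... disconnected 1 controller(s)"
  done

The five "disconnected 1 controller(s)" lines above correspond to those iterations; afterwards the usual teardown (rmmod of the nvme modules, killprocess 467226, namespace removal) runs.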
00:09:25.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
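As in the previous tests, nvmftestinit registers nvmftestfini on SIGINT/SIGTERM/EXIT so the target process, kernel modules and test namespace are cleaned up even if the script aborts. A rough sketch of that pattern, reconstructed from the teardown steps visible in the traces above (the real function in nvmf/common.sh also covers iso and virtual-device setups not used in this run):

  trap nvmftestfini SIGINT SIGTERM EXIT       # installed by nvmftestinit

  nvmftestfini() {
      # unload the kernel initiator modules (retried, hence the rmmod messages in the log)
      modprobe -v -r nvme-tcp
      modprobe -v -r nvme-fabrics
      # stop the nvmf_tgt application started for this test
      killprocess "$nvmfpid"
      # tear down the target namespace and flush the initiator address
      _remove_spdk_ns
      ip -4 addr flush cvl_0_1
  }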
00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:25.470 11:20:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:32.035 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:32.035 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:32.035 Found net devices under 0000:86:00.0: cvl_0_0 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
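The discovery pass being traced here first builds per-family lists of supported NIC PCI IDs (E810, X722, mlx5), then resolves each matching PCI function to its kernel net device through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A hedged sketch of the core lookup for the two E810 ports found in this run:

  # map a PCI function to its net device(s), as common.sh does via sysfs
  for pci in 0000:86:00.0 0000:86:00.1; do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          dev=${netdir##*/}                     # e.g. cvl_0_0 or cvl_0_1
          state=$(cat "$netdir/operstate")      # the trace only keeps devices that are "up"
          echo "Found net devices under $pci: $dev ($state)"
      done
  done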
00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:32.035 Found net devices under 0000:86:00.1: cvl_0_1 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:32.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:32.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:09:32.035 00:09:32.035 --- 10.0.0.2 ping statistics --- 00:09:32.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.035 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:32.035 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:09:32.035 00:09:32.035 --- 10.0.0.1 ping statistics --- 00:09:32.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.036 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=473624 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 473624 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 473624 ']' 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.036 11:20:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:32.036 [2024-07-15 11:20:14.832945] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:09:32.036 [2024-07-15 11:20:14.832992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.036 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.036 [2024-07-15 11:20:14.901726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.036 [2024-07-15 11:20:14.982624] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.036 [2024-07-15 11:20:14.982658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.036 [2024-07-15 11:20:14.982665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.036 [2024-07-15 11:20:14.982671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.036 [2024-07-15 11:20:14.982676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.036 [2024-07-15 11:20:14.982725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.036 [2024-07-15 11:20:14.982750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.036 [2024-07-15 11:20:14.982880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.036 [2024-07-15 11:20:14.982881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.294 11:20:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.294 11:20:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:32.294 11:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.294 11:20:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:32.294 11:20:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:32.294 11:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.294 11:20:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:32.294 11:20:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:32.294 11:20:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:32.294 11:20:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:32.294 11:20:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:32.294 "nvmf_tgt_1" 00:09:32.294 11:20:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:32.552 "nvmf_tgt_2" 00:09:32.553 11:20:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:32.553 11:20:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:32.553 11:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:32.553 11:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:32.811 true 00:09:32.811 11:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:32.811 true 00:09:32.811 11:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:32.811 11:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:32.811 11:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:32.811 11:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:32.811 11:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:32.811 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.811 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:32.811 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.811 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:32.811 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.811 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.811 rmmod nvme_tcp 00:09:33.069 rmmod nvme_fabrics 00:09:33.069 rmmod nvme_keyring 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 473624 ']' 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 473624 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 473624 ']' 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 473624 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 473624 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 473624' 00:09:33.069 killing process with pid 473624 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 473624 00:09:33.069 11:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 473624 00:09:33.328 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.328 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.328 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.328 11:20:16 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.328 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.328 11:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.328 11:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.328 11:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.235 11:20:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:35.235 00:09:35.235 real 0m9.894s 00:09:35.235 user 0m9.115s 00:09:35.235 sys 0m4.855s 00:09:35.235 11:20:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.235 11:20:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:35.235 ************************************ 00:09:35.235 END TEST nvmf_multitarget 00:09:35.235 ************************************ 00:09:35.235 11:20:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:35.235 11:20:18 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:35.235 11:20:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:35.235 11:20:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.235 11:20:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:35.235 ************************************ 00:09:35.235 START TEST nvmf_rpc 00:09:35.235 ************************************ 00:09:35.235 11:20:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:35.495 * Looking for test storage... 
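The nvmf_multitarget test that just completed exercises the per-target management RPCs through test/nvmf/target/multitarget_rpc.py. Reduced to its essence, the sequence traced above is (a sketch, using the script path from this workspace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists at start
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32         # add two named targets
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1               # delete them again
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default target

The nvmf_rpc test that starts next instead drives the single default target through the full subsystem, namespace and listener lifecycle.
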
00:09:35.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:35.495 11:20:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
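As in the previous test, rpc.sh begins by sourcing test/nvmf/common.sh, which fixes the listener ports and the initiator identity that every later nvme connect reuses. In outline (a sketch; the exact way common.sh derives the host ID from the generated NQN is an assumption here, but the resulting values match this run):

    NVMF_PORT=4420                                    # first listener port; 4421/4422 are additional ports
    NVME_HOSTNQN=$(nvme gen-hostnqn)                  # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}               # assumed: reuse the UUID portion as the host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
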
00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:42.069 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:42.069 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:42.069 Found net devices under 0000:86:00.0: cvl_0_0 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:42.069 Found net devices under 0000:86:00.1: cvl_0_1 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:42.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:09:42.069 00:09:42.069 --- 10.0.0.2 ping statistics --- 00:09:42.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.069 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
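The nvmf_tcp_init block traced above splits the two e810 ports into a target-side network namespace and an initiator side that stays in the root namespace, opens TCP port 4420, and sanity-checks connectivity in both directions. Condensed (interface names and addresses are the ones from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open TCP/4420 as the harness does
    ping -c 1 10.0.0.2                                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns
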
00:09:42.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:09:42.069 00:09:42.069 --- 10.0.0.1 ping statistics --- 00:09:42.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.069 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=477408 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 477408 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 477408 ']' 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:42.069 11:20:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.070 11:20:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:42.070 11:20:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.070 [2024-07-15 11:20:24.814274] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:42.070 [2024-07-15 11:20:24.814313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.070 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.070 [2024-07-15 11:20:24.886582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.070 [2024-07-15 11:20:24.966143] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.070 [2024-07-15 11:20:24.966180] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:42.070 [2024-07-15 11:20:24.966186] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.070 [2024-07-15 11:20:24.966192] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.070 [2024-07-15 11:20:24.966196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.070 [2024-07-15 11:20:24.966264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.070 [2024-07-15 11:20:24.966375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.070 [2024-07-15 11:20:24.966409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.070 [2024-07-15 11:20:24.966410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.070 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.070 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:42.070 11:20:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:42.070 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:42.070 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.070 11:20:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.070 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:42.329 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.329 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.329 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.329 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:42.329 "tick_rate": 2300000000, 00:09:42.329 "poll_groups": [ 00:09:42.329 { 00:09:42.329 "name": "nvmf_tgt_poll_group_000", 00:09:42.329 "admin_qpairs": 0, 00:09:42.329 "io_qpairs": 0, 00:09:42.329 "current_admin_qpairs": 0, 00:09:42.329 "current_io_qpairs": 0, 00:09:42.329 "pending_bdev_io": 0, 00:09:42.329 "completed_nvme_io": 0, 00:09:42.329 "transports": [] 00:09:42.329 }, 00:09:42.329 { 00:09:42.329 "name": "nvmf_tgt_poll_group_001", 00:09:42.329 "admin_qpairs": 0, 00:09:42.329 "io_qpairs": 0, 00:09:42.329 "current_admin_qpairs": 0, 00:09:42.329 "current_io_qpairs": 0, 00:09:42.329 "pending_bdev_io": 0, 00:09:42.329 "completed_nvme_io": 0, 00:09:42.329 "transports": [] 00:09:42.329 }, 00:09:42.329 { 00:09:42.329 "name": "nvmf_tgt_poll_group_002", 00:09:42.329 "admin_qpairs": 0, 00:09:42.329 "io_qpairs": 0, 00:09:42.329 "current_admin_qpairs": 0, 00:09:42.329 "current_io_qpairs": 0, 00:09:42.329 "pending_bdev_io": 0, 00:09:42.329 "completed_nvme_io": 0, 00:09:42.329 "transports": [] 00:09:42.329 }, 00:09:42.329 { 00:09:42.329 "name": "nvmf_tgt_poll_group_003", 00:09:42.329 "admin_qpairs": 0, 00:09:42.329 "io_qpairs": 0, 00:09:42.329 "current_admin_qpairs": 0, 00:09:42.329 "current_io_qpairs": 0, 00:09:42.329 "pending_bdev_io": 0, 00:09:42.329 "completed_nvme_io": 0, 00:09:42.329 "transports": [] 00:09:42.329 } 00:09:42.329 ] 00:09:42.329 }' 00:09:42.329 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:42.329 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:42.329 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:42.329 11:20:25 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:42.329 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:42.329 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.330 [2024-07-15 11:20:25.767539] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:42.330 "tick_rate": 2300000000, 00:09:42.330 "poll_groups": [ 00:09:42.330 { 00:09:42.330 "name": "nvmf_tgt_poll_group_000", 00:09:42.330 "admin_qpairs": 0, 00:09:42.330 "io_qpairs": 0, 00:09:42.330 "current_admin_qpairs": 0, 00:09:42.330 "current_io_qpairs": 0, 00:09:42.330 "pending_bdev_io": 0, 00:09:42.330 "completed_nvme_io": 0, 00:09:42.330 "transports": [ 00:09:42.330 { 00:09:42.330 "trtype": "TCP" 00:09:42.330 } 00:09:42.330 ] 00:09:42.330 }, 00:09:42.330 { 00:09:42.330 "name": "nvmf_tgt_poll_group_001", 00:09:42.330 "admin_qpairs": 0, 00:09:42.330 "io_qpairs": 0, 00:09:42.330 "current_admin_qpairs": 0, 00:09:42.330 "current_io_qpairs": 0, 00:09:42.330 "pending_bdev_io": 0, 00:09:42.330 "completed_nvme_io": 0, 00:09:42.330 "transports": [ 00:09:42.330 { 00:09:42.330 "trtype": "TCP" 00:09:42.330 } 00:09:42.330 ] 00:09:42.330 }, 00:09:42.330 { 00:09:42.330 "name": "nvmf_tgt_poll_group_002", 00:09:42.330 "admin_qpairs": 0, 00:09:42.330 "io_qpairs": 0, 00:09:42.330 "current_admin_qpairs": 0, 00:09:42.330 "current_io_qpairs": 0, 00:09:42.330 "pending_bdev_io": 0, 00:09:42.330 "completed_nvme_io": 0, 00:09:42.330 "transports": [ 00:09:42.330 { 00:09:42.330 "trtype": "TCP" 00:09:42.330 } 00:09:42.330 ] 00:09:42.330 }, 00:09:42.330 { 00:09:42.330 "name": "nvmf_tgt_poll_group_003", 00:09:42.330 "admin_qpairs": 0, 00:09:42.330 "io_qpairs": 0, 00:09:42.330 "current_admin_qpairs": 0, 00:09:42.330 "current_io_qpairs": 0, 00:09:42.330 "pending_bdev_io": 0, 00:09:42.330 "completed_nvme_io": 0, 00:09:42.330 "transports": [ 00:09:42.330 { 00:09:42.330 "trtype": "TCP" 00:09:42.330 } 00:09:42.330 ] 00:09:42.330 } 00:09:42.330 ] 00:09:42.330 }' 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
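With the target up, the test first asserts that nvmf_get_stats reports four poll groups (one per core in the 0xF mask) with no transports, creates the TCP transport, and then checks that the transport shows up in every poll group. A minimal sketch of that check, using the harness's rpc_cmd wrapper as in the trace:

    rpc_cmd nvmf_get_stats | jq '.poll_groups[].name' | wc -l           # expect 4
    rpc_cmd nvmf_get_stats | jq '.poll_groups[0].transports[0]'         # null before a transport exists
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                     # flags exactly as used in this run
    rpc_cmd nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype'  # now "TCP"
    # jsum-style checks then confirm admin_qpairs/io_qpairs still sum to zero
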
00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.330 Malloc1 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.330 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.589 [2024-07-15 11:20:25.935608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:09:42.589 [2024-07-15 11:20:25.963982] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:09:42.589 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:42.589 could not add new controller: failed to write to nvme-fabrics device 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:42.589 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:42.590 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:42.590 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:42.590 11:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:42.590 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.590 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.590 11:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.590 11:20:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:43.588 11:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:43.588 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:43.588 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:43.588 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:43.588 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.120 11:20:29 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:46.120 [2024-07-15 11:20:29.396817] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:09:46.120 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:46.120 could not add new controller: failed to write to nvme-fabrics device 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.120 11:20:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:47.056 11:20:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:47.056 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:47.056 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:47.056 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:47.056 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:49.587 11:20:32 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.587 [2024-07-15 11:20:32.772096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.587 11:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.588 11:20:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:50.521 11:20:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:50.521 11:20:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:50.521 11:20:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:50.521 11:20:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:50.521 11:20:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:52.422 11:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:52.422 11:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:52.422 11:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:52.422 11:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:52.422 11:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:52.422 11:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:52.422 11:20:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.682 [2024-07-15 11:20:36.102132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.682 11:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:53.617 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:53.617 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:53.617 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:53.617 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:53.617 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:56.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.146 [2024-07-15 11:20:39.333258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.146 11:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:57.080 11:20:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:57.080 11:20:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:57.080 11:20:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.080 11:20:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:57.080 11:20:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:58.984 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:58.984 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:58.984 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:58.984 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:58.984 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:58.984 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:58.984 11:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.243 [2024-07-15 11:20:42.659372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.243 11:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:00.177 11:20:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.177 11:20:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:00.177 11:20:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.177 11:20:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:00.177 11:20:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:02.709 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:02.709 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:02.709 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.709 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:02.709 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.709 
11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:02.709 11:20:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.709 11:20:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.710 [2024-07-15 11:20:45.937276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.710 11:20:45 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.710 11:20:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:03.666 11:20:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:03.666 11:20:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:03.666 11:20:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:03.666 11:20:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:03.666 11:20:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:05.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:05.575 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.576 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.576 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.576 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.576 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.576 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.576 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.576 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 [2024-07-15 11:20:49.186348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 [2024-07-15 11:20:49.234463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 [2024-07-15 11:20:49.286628] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.835 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
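The long autotest_common.sh@1198-@1231 stretches repeated in the traces above are the serial-number wait helpers: after each "nvme connect" the test polls lsblk until a block device carrying the subsystem serial appears, and after "nvme disconnect" it polls until the serial is gone again. A rough reconstruction from the traces (the real helpers take an optional device count and may differ in detail):

    waitforserial() {
        local serial=$1 expected=${2:-1} found=0 i=0
        while ((i++ <= 15)); do
            sleep 2
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((found == expected)) && return 0
        done
        return 1
    }

    waitforserial_disconnect() {
        local serial=$1 i=0
        # keep checking until no block device reports the serial any more
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            ((i++ > 15)) && return 1
            sleep 2
        done
        return 0
    }
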
00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 [2024-07-15 11:20:49.334784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
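Both loops in this part of target/rpc.sh drive the same create/tear-down cycle through rpc_cmd, a thin wrapper around scripts/rpc.py. Written out with the values from this run it amounts to the sketch below; the first loop (rpc.sh@81-94) adds the nvme connect/disconnect round trip and uses namespace ID 5, while the second (rpc.sh@99-107) only adds and removes namespace 1:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host "$nqn"

    # first loop only: attach a host over TCP and wait for the namespace to show up
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n "$nqn"
    waitforserial_disconnect SPDKISFASTANDAWESOME

    $rpc nvmf_subsystem_remove_ns "$nqn" 5
    $rpc nvmf_delete_subsystem "$nqn"
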
00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 [2024-07-15 11:20:49.382943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.836 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.095 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.095 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:06.095 "tick_rate": 2300000000, 00:10:06.095 "poll_groups": [ 00:10:06.095 { 00:10:06.095 "name": "nvmf_tgt_poll_group_000", 00:10:06.095 "admin_qpairs": 2, 00:10:06.095 "io_qpairs": 168, 00:10:06.095 "current_admin_qpairs": 0, 00:10:06.095 "current_io_qpairs": 0, 00:10:06.095 "pending_bdev_io": 0, 00:10:06.095 "completed_nvme_io": 287, 00:10:06.095 "transports": [ 00:10:06.095 { 00:10:06.095 "trtype": "TCP" 00:10:06.095 } 00:10:06.095 ] 00:10:06.095 }, 00:10:06.095 { 00:10:06.095 "name": "nvmf_tgt_poll_group_001", 00:10:06.095 "admin_qpairs": 2, 00:10:06.095 "io_qpairs": 168, 00:10:06.095 "current_admin_qpairs": 0, 00:10:06.095 "current_io_qpairs": 0, 00:10:06.095 "pending_bdev_io": 0, 00:10:06.095 "completed_nvme_io": 269, 00:10:06.095 "transports": [ 00:10:06.095 { 00:10:06.095 "trtype": "TCP" 00:10:06.095 } 00:10:06.095 ] 00:10:06.095 }, 00:10:06.095 { 
00:10:06.095 "name": "nvmf_tgt_poll_group_002", 00:10:06.095 "admin_qpairs": 1, 00:10:06.095 "io_qpairs": 168, 00:10:06.095 "current_admin_qpairs": 0, 00:10:06.095 "current_io_qpairs": 0, 00:10:06.095 "pending_bdev_io": 0, 00:10:06.095 "completed_nvme_io": 221, 00:10:06.095 "transports": [ 00:10:06.095 { 00:10:06.095 "trtype": "TCP" 00:10:06.095 } 00:10:06.095 ] 00:10:06.095 }, 00:10:06.095 { 00:10:06.095 "name": "nvmf_tgt_poll_group_003", 00:10:06.095 "admin_qpairs": 2, 00:10:06.096 "io_qpairs": 168, 00:10:06.096 "current_admin_qpairs": 0, 00:10:06.096 "current_io_qpairs": 0, 00:10:06.096 "pending_bdev_io": 0, 00:10:06.096 "completed_nvme_io": 245, 00:10:06.096 "transports": [ 00:10:06.096 { 00:10:06.096 "trtype": "TCP" 00:10:06.096 } 00:10:06.096 ] 00:10:06.096 } 00:10:06.096 ] 00:10:06.096 }' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:06.096 rmmod nvme_tcp 00:10:06.096 rmmod nvme_fabrics 00:10:06.096 rmmod nvme_keyring 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 477408 ']' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 477408 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 477408 ']' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 477408 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 477408 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 477408' 00:10:06.096 killing process with pid 477408 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 477408 00:10:06.096 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 477408 00:10:06.355 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.355 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.355 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.355 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.355 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.355 11:20:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.355 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.355 11:20:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.890 11:20:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:08.890 00:10:08.890 real 0m33.087s 00:10:08.890 user 1m40.831s 00:10:08.890 sys 0m6.184s 00:10:08.890 11:20:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.890 11:20:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.890 ************************************ 00:10:08.890 END TEST nvmf_rpc 00:10:08.890 ************************************ 00:10:08.890 11:20:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:08.890 11:20:51 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:08.890 11:20:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:08.890 11:20:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.890 11:20:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:08.890 ************************************ 00:10:08.890 START TEST nvmf_invalid 00:10:08.890 ************************************ 00:10:08.890 11:20:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:08.890 * Looking for test storage... 
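The final check of the nvmf_rpc test, traced just above, pulls nvmf_get_stats once and sums per-poll-group counters with the jsum helper (target/rpc.sh@19-20). Roughly, assuming jsum reads the captured JSON out of $stats (the trace only shows the jq and awk stages):

    stats=$(rpc_cmd nvmf_get_stats)

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 across the 4 poll groups in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 672 (4 x 168) in this run
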
00:10:08.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.890 11:20:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:08.891 11:20:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:14.168 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:14.168 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:14.168 Found net devices under 0000:86:00.0: cvl_0_0 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:14.168 Found net devices under 0000:86:00.1: cvl_0_1 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.168 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:14.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:14.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:10:14.428 00:10:14.428 --- 10.0.0.2 ping statistics --- 00:10:14.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.428 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:10:14.428 00:10:14.428 --- 10.0.0.1 ping statistics --- 00:10:14.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.428 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=485248 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 485248 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 485248 ']' 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:14.428 11:20:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:14.428 [2024-07-15 11:20:57.985439] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
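All of the nvmf_tcp_init plumbing traced above reduces to a small two-namespace topology: the first port found (cvl_0_0) becomes the target side inside a private network namespace, the second (cvl_0_1) stays in the root namespace as the initiator, and NVMe/TCP traffic to port 4420 is allowed through. Condensed from the commands in the trace (the real helper also flushes stale addresses first):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator

This is also why nvmf_tgt is launched as "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt" further down: NVMF_APP is prefixed with NVMF_TARGET_NS_CMD so the target listens from inside the namespace.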
00:10:14.428 [2024-07-15 11:20:57.985485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.428 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.687 [2024-07-15 11:20:58.056771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.687 [2024-07-15 11:20:58.136751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.687 [2024-07-15 11:20:58.136784] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.687 [2024-07-15 11:20:58.136791] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.687 [2024-07-15 11:20:58.136797] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.687 [2024-07-15 11:20:58.136802] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.687 [2024-07-15 11:20:58.136861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.687 [2024-07-15 11:20:58.136976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.687 [2024-07-15 11:20:58.137081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.687 [2024-07-15 11:20:58.137082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.255 11:20:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:15.255 11:20:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:15.255 11:20:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:15.255 11:20:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:15.255 11:20:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:15.255 11:20:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.255 11:20:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:15.255 11:20:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29617 00:10:15.514 [2024-07-15 11:20:58.995540] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:15.514 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:15.514 { 00:10:15.514 "nqn": "nqn.2016-06.io.spdk:cnode29617", 00:10:15.514 "tgt_name": "foobar", 00:10:15.514 "method": "nvmf_create_subsystem", 00:10:15.514 "req_id": 1 00:10:15.514 } 00:10:15.514 Got JSON-RPC error response 00:10:15.514 response: 00:10:15.514 { 00:10:15.514 "code": -32603, 00:10:15.514 "message": "Unable to find target foobar" 00:10:15.514 }' 00:10:15.514 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:15.514 { 00:10:15.514 "nqn": "nqn.2016-06.io.spdk:cnode29617", 00:10:15.514 "tgt_name": "foobar", 00:10:15.514 "method": "nvmf_create_subsystem", 00:10:15.514 "req_id": 1 00:10:15.514 } 00:10:15.514 Got JSON-RPC error response 00:10:15.514 response: 00:10:15.514 { 00:10:15.514 "code": -32603, 00:10:15.514 "message": "Unable to find target foobar" 
00:10:15.514 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:15.514 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:15.514 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3205 00:10:15.772 [2024-07-15 11:20:59.192250] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3205: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:15.772 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:15.772 { 00:10:15.772 "nqn": "nqn.2016-06.io.spdk:cnode3205", 00:10:15.772 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:15.772 "method": "nvmf_create_subsystem", 00:10:15.772 "req_id": 1 00:10:15.772 } 00:10:15.772 Got JSON-RPC error response 00:10:15.772 response: 00:10:15.772 { 00:10:15.772 "code": -32602, 00:10:15.772 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:15.772 }' 00:10:15.772 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:15.772 { 00:10:15.772 "nqn": "nqn.2016-06.io.spdk:cnode3205", 00:10:15.772 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:15.772 "method": "nvmf_create_subsystem", 00:10:15.772 "req_id": 1 00:10:15.772 } 00:10:15.772 Got JSON-RPC error response 00:10:15.772 response: 00:10:15.772 { 00:10:15.772 "code": -32602, 00:10:15.772 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:15.772 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:15.772 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:15.772 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10858 00:10:16.031 [2024-07-15 11:20:59.368782] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10858: invalid model number 'SPDK_Controller' 00:10:16.031 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:16.031 { 00:10:16.031 "nqn": "nqn.2016-06.io.spdk:cnode10858", 00:10:16.031 "model_number": "SPDK_Controller\u001f", 00:10:16.031 "method": "nvmf_create_subsystem", 00:10:16.031 "req_id": 1 00:10:16.031 } 00:10:16.031 Got JSON-RPC error response 00:10:16.031 response: 00:10:16.031 { 00:10:16.031 "code": -32602, 00:10:16.031 "message": "Invalid MN SPDK_Controller\u001f" 00:10:16.031 }' 00:10:16.031 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:16.031 { 00:10:16.031 "nqn": "nqn.2016-06.io.spdk:cnode10858", 00:10:16.031 "model_number": "SPDK_Controller\u001f", 00:10:16.031 "method": "nvmf_create_subsystem", 00:10:16.031 "req_id": 1 00:10:16.031 } 00:10:16.031 Got JSON-RPC error response 00:10:16.031 response: 00:10:16.031 { 00:10:16.032 "code": -32602, 00:10:16.032 "message": "Invalid MN SPDK_Controller\u001f" 00:10:16.032 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 
11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 
11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'dI7:v2l!*C:_#rY>q3QG' 00:10:16.032 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'dI7:v2l!*C:_#rY>q3QG' nqn.2016-06.io.spdk:cnode310 00:10:16.292 [2024-07-15 11:20:59.693890] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode310: invalid serial number 'dI7:v2l!*C:_#rY>q3QG' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:16.292 { 00:10:16.292 "nqn": "nqn.2016-06.io.spdk:cnode310", 00:10:16.292 "serial_number": "dI7:v\u007f2l!*C:_#rY>q3QG", 00:10:16.292 "method": "nvmf_create_subsystem", 00:10:16.292 "req_id": 1 00:10:16.292 } 00:10:16.292 Got JSON-RPC error response 00:10:16.292 response: 00:10:16.292 { 
00:10:16.292 "code": -32602, 00:10:16.292 "message": "Invalid SN dI7:v\u007f2l!*C:_#rY>q3QG" 00:10:16.292 }' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:16.292 { 00:10:16.292 "nqn": "nqn.2016-06.io.spdk:cnode310", 00:10:16.292 "serial_number": "dI7:v\u007f2l!*C:_#rY>q3QG", 00:10:16.292 "method": "nvmf_create_subsystem", 00:10:16.292 "req_id": 1 00:10:16.292 } 00:10:16.292 Got JSON-RPC error response 00:10:16.292 response: 00:10:16.292 { 00:10:16.292 "code": -32602, 00:10:16.292 "message": "Invalid SN dI7:v\u007f2l!*C:_#rY>q3QG" 00:10:16.292 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
125 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x54' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 
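The character loop continues below; stepping back from it for a moment, every negative case in this trace follows the same shape: call rpc.py nvmf_create_subsystem with one invalid field, capture the JSON-RPC error text, and glob-match the expected message. A minimal sketch of that check, reusing the bad serial number case from earlier in the trace (the 2>&1 capture and the '|| true' exit-status handling are assumptions; the trace only shows the captured text and the [[ ... == *pattern* ]] comparison):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3205 2>&1) || true
[[ $out == *"Invalid SN"* ]] || echo "unexpected response: $out"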
00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:16.292 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.293 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.293 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:16.293 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:16.293 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
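The per-character step has now repeated enough times to show the whole pattern: take a decimal code from the chars array, print it as hex with printf %x, turn it into the character with echo -e '\xNN', and append it to string; the loop below keeps going until all 41 characters are assembled and the result is passed to nvmf_create_subsystem as a model number. A minimal reconstruction of that step, assuming a RANDOM-based pick since the trace does not show how each code is chosen:

chars=($(seq 32 127))                           # same codes as the chars=(...) array traced above
string=; length=41
for (( ll = 0; ll < length; ll++ )); do
    code=${chars[RANDOM % ${#chars[@]}]}        # assumed selection; invalid.sh may choose differently
    string+=$(echo -e "\x$(printf %x "$code")") # e.g. 100 -> \x64 -> 'd'
done
printf '%s\n' "$string"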
00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '5ARw}T&^1`XTmdlfGgSl'\''Hjz34AwZr3STL!T!{b:' 00:10:16.553 11:20:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '5ARw}T&^1`XTmdlfGgSl'\''Hjz34AwZr3STL!T!{b:' nqn.2016-06.io.spdk:cnode28648 00:10:16.812 [2024-07-15 11:21:00.147482] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28648: invalid model number '5ARw}T&^1`XTmdlfGgSl'Hjz34AwZr3STL!T!{b:' 00:10:16.812 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:16.812 { 00:10:16.812 "nqn": "nqn.2016-06.io.spdk:cnode28648", 00:10:16.812 "model_number": "5ARw}T&^1`\u007fXTmdlfGgSl'\''Hjz34AwZr3STL!T!{b:", 00:10:16.812 "method": "nvmf_create_subsystem", 00:10:16.812 "req_id": 1 00:10:16.812 } 00:10:16.812 Got JSON-RPC error response 00:10:16.812 response: 00:10:16.812 { 00:10:16.812 "code": -32602, 00:10:16.812 "message": "Invalid MN 5ARw}T&^1`\u007fXTmdlfGgSl'\''Hjz34AwZr3STL!T!{b:" 00:10:16.812 }' 00:10:16.812 11:21:00 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:16.812 { 00:10:16.812 "nqn": "nqn.2016-06.io.spdk:cnode28648", 00:10:16.812 "model_number": "5ARw}T&^1`\u007fXTmdlfGgSl'Hjz34AwZr3STL!T!{b:", 00:10:16.812 "method": "nvmf_create_subsystem", 00:10:16.812 "req_id": 1 00:10:16.812 } 00:10:16.812 Got JSON-RPC error response 00:10:16.812 response: 00:10:16.812 { 00:10:16.812 "code": -32602, 00:10:16.812 "message": "Invalid MN 5ARw}T&^1`\u007fXTmdlfGgSl'Hjz34AwZr3STL!T!{b:" 00:10:16.812 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:16.812 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:16.812 [2024-07-15 11:21:00.344234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.812 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:17.071 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:17.071 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:17.071 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:17.071 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:17.071 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:17.330 [2024-07-15 11:21:00.741535] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:17.330 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:17.330 { 00:10:17.330 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:17.330 "listen_address": { 00:10:17.330 "trtype": "tcp", 00:10:17.330 "traddr": "", 00:10:17.330 "trsvcid": "4421" 00:10:17.330 }, 00:10:17.330 "method": "nvmf_subsystem_remove_listener", 00:10:17.330 "req_id": 1 00:10:17.330 } 00:10:17.330 Got JSON-RPC error response 00:10:17.330 response: 00:10:17.330 { 00:10:17.330 "code": -32602, 00:10:17.330 "message": "Invalid parameters" 00:10:17.330 }' 00:10:17.330 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:17.330 { 00:10:17.330 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:17.330 "listen_address": { 00:10:17.330 "trtype": "tcp", 00:10:17.330 "traddr": "", 00:10:17.330 "trsvcid": "4421" 00:10:17.330 }, 00:10:17.330 "method": "nvmf_subsystem_remove_listener", 00:10:17.330 "req_id": 1 00:10:17.330 } 00:10:17.330 Got JSON-RPC error response 00:10:17.330 response: 00:10:17.330 { 00:10:17.330 "code": -32602, 00:10:17.330 "message": "Invalid parameters" 00:10:17.330 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:17.330 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15776 -i 0 00:10:17.588 [2024-07-15 11:21:00.934193] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15776: invalid cntlid range [0-65519] 00:10:17.588 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:17.588 { 00:10:17.588 "nqn": "nqn.2016-06.io.spdk:cnode15776", 00:10:17.588 "min_cntlid": 0, 00:10:17.588 "method": "nvmf_create_subsystem", 00:10:17.588 "req_id": 1 00:10:17.588 } 00:10:17.588 Got JSON-RPC error response 00:10:17.588 response: 
00:10:17.588 { 00:10:17.588 "code": -32602, 00:10:17.588 "message": "Invalid cntlid range [0-65519]" 00:10:17.588 }' 00:10:17.588 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:17.588 { 00:10:17.588 "nqn": "nqn.2016-06.io.spdk:cnode15776", 00:10:17.588 "min_cntlid": 0, 00:10:17.588 "method": "nvmf_create_subsystem", 00:10:17.588 "req_id": 1 00:10:17.588 } 00:10:17.588 Got JSON-RPC error response 00:10:17.588 response: 00:10:17.589 { 00:10:17.589 "code": -32602, 00:10:17.589 "message": "Invalid cntlid range [0-65519]" 00:10:17.589 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:17.589 11:21:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2060 -i 65520 00:10:17.589 [2024-07-15 11:21:01.118805] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2060: invalid cntlid range [65520-65519] 00:10:17.589 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:17.589 { 00:10:17.589 "nqn": "nqn.2016-06.io.spdk:cnode2060", 00:10:17.589 "min_cntlid": 65520, 00:10:17.589 "method": "nvmf_create_subsystem", 00:10:17.589 "req_id": 1 00:10:17.589 } 00:10:17.589 Got JSON-RPC error response 00:10:17.589 response: 00:10:17.589 { 00:10:17.589 "code": -32602, 00:10:17.589 "message": "Invalid cntlid range [65520-65519]" 00:10:17.589 }' 00:10:17.589 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:17.589 { 00:10:17.589 "nqn": "nqn.2016-06.io.spdk:cnode2060", 00:10:17.589 "min_cntlid": 65520, 00:10:17.589 "method": "nvmf_create_subsystem", 00:10:17.589 "req_id": 1 00:10:17.589 } 00:10:17.589 Got JSON-RPC error response 00:10:17.589 response: 00:10:17.589 { 00:10:17.589 "code": -32602, 00:10:17.589 "message": "Invalid cntlid range [65520-65519]" 00:10:17.589 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:17.589 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15433 -I 0 00:10:17.847 [2024-07-15 11:21:01.311488] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15433: invalid cntlid range [1-0] 00:10:17.847 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:17.847 { 00:10:17.847 "nqn": "nqn.2016-06.io.spdk:cnode15433", 00:10:17.847 "max_cntlid": 0, 00:10:17.847 "method": "nvmf_create_subsystem", 00:10:17.847 "req_id": 1 00:10:17.847 } 00:10:17.847 Got JSON-RPC error response 00:10:17.847 response: 00:10:17.847 { 00:10:17.847 "code": -32602, 00:10:17.847 "message": "Invalid cntlid range [1-0]" 00:10:17.847 }' 00:10:17.847 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:17.847 { 00:10:17.847 "nqn": "nqn.2016-06.io.spdk:cnode15433", 00:10:17.847 "max_cntlid": 0, 00:10:17.847 "method": "nvmf_create_subsystem", 00:10:17.847 "req_id": 1 00:10:17.847 } 00:10:17.847 Got JSON-RPC error response 00:10:17.847 response: 00:10:17.847 { 00:10:17.847 "code": -32602, 00:10:17.847 "message": "Invalid cntlid range [1-0]" 00:10:17.847 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:17.847 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26815 -I 65520 00:10:18.106 [2024-07-15 11:21:01.504069] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26815: invalid cntlid range [1-65520] 00:10:18.106 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:18.106 { 00:10:18.106 "nqn": "nqn.2016-06.io.spdk:cnode26815", 00:10:18.106 "max_cntlid": 65520, 00:10:18.106 "method": "nvmf_create_subsystem", 00:10:18.106 "req_id": 1 00:10:18.106 } 00:10:18.106 Got JSON-RPC error response 00:10:18.106 response: 00:10:18.106 { 00:10:18.106 "code": -32602, 00:10:18.106 "message": "Invalid cntlid range [1-65520]" 00:10:18.106 }' 00:10:18.106 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:18.106 { 00:10:18.106 "nqn": "nqn.2016-06.io.spdk:cnode26815", 00:10:18.106 "max_cntlid": 65520, 00:10:18.106 "method": "nvmf_create_subsystem", 00:10:18.106 "req_id": 1 00:10:18.106 } 00:10:18.106 Got JSON-RPC error response 00:10:18.106 response: 00:10:18.106 { 00:10:18.106 "code": -32602, 00:10:18.106 "message": "Invalid cntlid range [1-65520]" 00:10:18.106 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:18.106 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3723 -i 6 -I 5 00:10:18.106 [2024-07-15 11:21:01.696688] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3723: invalid cntlid range [6-5] 00:10:18.365 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:18.365 { 00:10:18.365 "nqn": "nqn.2016-06.io.spdk:cnode3723", 00:10:18.365 "min_cntlid": 6, 00:10:18.365 "max_cntlid": 5, 00:10:18.365 "method": "nvmf_create_subsystem", 00:10:18.365 "req_id": 1 00:10:18.365 } 00:10:18.365 Got JSON-RPC error response 00:10:18.366 response: 00:10:18.366 { 00:10:18.366 "code": -32602, 00:10:18.366 "message": "Invalid cntlid range [6-5]" 00:10:18.366 }' 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:18.366 { 00:10:18.366 "nqn": "nqn.2016-06.io.spdk:cnode3723", 00:10:18.366 "min_cntlid": 6, 00:10:18.366 "max_cntlid": 5, 00:10:18.366 "method": "nvmf_create_subsystem", 00:10:18.366 "req_id": 1 00:10:18.366 } 00:10:18.366 Got JSON-RPC error response 00:10:18.366 response: 00:10:18.366 { 00:10:18.366 "code": -32602, 00:10:18.366 "message": "Invalid cntlid range [6-5]" 00:10:18.366 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:18.366 { 00:10:18.366 "name": "foobar", 00:10:18.366 "method": "nvmf_delete_target", 00:10:18.366 "req_id": 1 00:10:18.366 } 00:10:18.366 Got JSON-RPC error response 00:10:18.366 response: 00:10:18.366 { 00:10:18.366 "code": -32602, 00:10:18.366 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:18.366 }' 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:18.366 { 00:10:18.366 "name": "foobar", 00:10:18.366 "method": "nvmf_delete_target", 00:10:18.366 "req_id": 1 00:10:18.366 } 00:10:18.366 Got JSON-RPC error response 00:10:18.366 response: 00:10:18.366 { 00:10:18.366 "code": -32602, 00:10:18.366 "message": "The specified target doesn't exist, cannot delete it." 
00:10:18.366 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:18.366 rmmod nvme_tcp 00:10:18.366 rmmod nvme_fabrics 00:10:18.366 rmmod nvme_keyring 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 485248 ']' 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 485248 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 485248 ']' 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 485248 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 485248 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 485248' 00:10:18.366 killing process with pid 485248 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 485248 00:10:18.366 11:21:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 485248 00:10:18.626 11:21:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:18.626 11:21:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:18.626 11:21:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:18.626 11:21:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:18.626 11:21:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:18.626 11:21:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.626 11:21:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.626 11:21:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.156 11:21:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:21.156 00:10:21.156 real 0m12.208s 00:10:21.156 user 0m19.841s 00:10:21.156 sys 0m5.368s 00:10:21.156 11:21:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.156 11:21:04 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.156 ************************************ 00:10:21.156 END TEST nvmf_invalid 00:10:21.156 ************************************ 00:10:21.156 11:21:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:21.156 11:21:04 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:21.156 11:21:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:21.156 11:21:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.156 11:21:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:21.156 ************************************ 00:10:21.156 START TEST nvmf_abort 00:10:21.156 ************************************ 00:10:21.156 11:21:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:21.156 * Looking for test storage... 00:10:21.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.156 11:21:04 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.156 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:21.156 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.156 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.156 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.156 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.156 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.156 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:21.157 11:21:04 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:21.157 11:21:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.463 
11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:26.463 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:26.463 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:26.463 Found net devices under 0000:86:00.0: cvl_0_0 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:26.463 Found net devices under 0000:86:00.1: cvl_0_1 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.463 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.464 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.464 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:26.464 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.464 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.464 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:26.464 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.464 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.464 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:26.464 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:26.464 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.464 11:21:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.464 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.464 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:26.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:26.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:10:26.723 00:10:26.723 --- 10.0.0.2 ping statistics --- 00:10:26.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.723 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:10:26.723 00:10:26.723 --- 10.0.0.1 ping statistics --- 00:10:26.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.723 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=489985 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 489985 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 489985 ']' 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.723 11:21:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.723 [2024-07-15 11:21:10.254166] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:10:26.723 [2024-07-15 11:21:10.254211] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.723 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.983 [2024-07-15 11:21:10.327395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:26.983 [2024-07-15 11:21:10.406134] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.983 [2024-07-15 11:21:10.406168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.983 [2024-07-15 11:21:10.406175] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.983 [2024-07-15 11:21:10.406181] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.983 [2024-07-15 11:21:10.406186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.983 [2024-07-15 11:21:10.406304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.983 [2024-07-15 11:21:10.406428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.983 [2024-07-15 11:21:10.406428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.550 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.550 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:27.550 11:21:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.551 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:27.551 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.551 11:21:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.551 11:21:11 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:27.551 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.551 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.551 [2024-07-15 11:21:11.106854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.551 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.551 11:21:11 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:27.551 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.551 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.808 Malloc0 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.808 Delay0 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.808 [2024-07-15 11:21:11.177234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.808 11:21:11 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:27.809 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.809 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.809 11:21:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.809 11:21:11 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:27.809 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.809 [2024-07-15 11:21:11.257249] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:29.712 Initializing NVMe Controllers 00:10:29.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:29.712 controller IO queue size 128 less than required 00:10:29.712 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:29.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:29.712 Initialization complete. Launching workers. 
00:10:29.712 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42713 00:10:29.712 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42774, failed to submit 62 00:10:29.712 success 42717, unsuccess 57, failed 0 00:10:29.713 11:21:13 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:29.713 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.713 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.970 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.970 11:21:13 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:29.970 11:21:13 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.971 rmmod nvme_tcp 00:10:29.971 rmmod nvme_fabrics 00:10:29.971 rmmod nvme_keyring 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 489985 ']' 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 489985 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 489985 ']' 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 489985 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 489985 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 489985' 00:10:29.971 killing process with pid 489985 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 489985 00:10:29.971 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 489985 00:10:30.229 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:30.229 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:30.229 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:30.229 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:30.229 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:30.229 11:21:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.229 11:21:13 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.229 11:21:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.135 11:21:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:32.135 00:10:32.135 real 0m11.411s 00:10:32.135 user 0m12.838s 00:10:32.135 sys 0m5.308s 00:10:32.135 11:21:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.135 11:21:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.135 ************************************ 00:10:32.135 END TEST nvmf_abort 00:10:32.135 ************************************ 00:10:32.135 11:21:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:32.135 11:21:15 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:32.135 11:21:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:32.135 11:21:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.135 11:21:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:32.396 ************************************ 00:10:32.396 START TEST nvmf_ns_hotplug_stress 00:10:32.396 ************************************ 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:32.396 * Looking for test storage... 00:10:32.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.396 11:21:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.396 11:21:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:32.396 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:38.968 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:38.968 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.968 11:21:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:38.968 Found net devices under 0000:86:00.0: cvl_0_0 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:38.968 Found net devices under 0000:86:00.1: cvl_0_1 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.968 11:21:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.968 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:38.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:10:38.969 00:10:38.969 --- 10.0.0.2 ping statistics --- 00:10:38.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.969 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:38.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:10:38.969 00:10:38.969 --- 10.0.0.1 ping statistics --- 00:10:38.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.969 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=494046 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 494046 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 494046 ']' 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.969 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.969 [2024-07-15 11:21:21.655331] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:10:38.969 [2024-07-15 11:21:21.655380] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.969 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.969 [2024-07-15 11:21:21.727823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:38.969 [2024-07-15 11:21:21.806717] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.969 [2024-07-15 11:21:21.806749] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.969 [2024-07-15 11:21:21.806757] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.969 [2024-07-15 11:21:21.806762] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.969 [2024-07-15 11:21:21.806767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.969 [2024-07-15 11:21:21.806878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.969 [2024-07-15 11:21:21.806999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.969 [2024-07-15 11:21:21.806999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.969 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.969 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:38.969 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:38.969 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:38.969 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.969 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.969 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:38.969 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:39.228 [2024-07-15 11:21:22.664349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.228 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:39.487 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.487 [2024-07-15 11:21:23.037686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.487 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:39.745 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:40.004 Malloc0 00:10:40.004 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:40.262 Delay0 00:10:40.262 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.262 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:40.521 NULL1 00:10:40.521 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:40.780 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:40.780 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=494438 00:10:40.780 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:40.780 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.780 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.780 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.039 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:41.039 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:41.298 true 00:10:41.298 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:41.298 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.556 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.556 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:41.556 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:41.814 true 00:10:41.814 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:41.814 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.072 Read completed with error (sct=0, sc=11) 00:10:42.072 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.072 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:42.073 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:42.331 true 00:10:42.331 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:42.331 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.267 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.267 11:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.267 11:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:43.267 11:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:43.525 true 00:10:43.525 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:43.525 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.782 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.040 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:44.040 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:44.040 true 00:10:44.040 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:44.040 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.297 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.555 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:44.555 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:44.555 true 00:10:44.555 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:44.555 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.812 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.073 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:45.073 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:45.073 true 00:10:45.073 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:45.073 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.505 11:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.505 11:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:46.505 11:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:46.762 true 00:10:46.762 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:46.762 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.694 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.694 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:47.694 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:47.952 true 00:10:47.952 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:47.952 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.952 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.209 11:21:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:48.209 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:48.465 true 00:10:48.465 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:48.465 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.396 11:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.654 11:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:49.654 11:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:49.911 true 00:10:49.911 11:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:49.911 11:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.848 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.848 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:50.848 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:51.107 true 00:10:51.107 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:51.107 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.366 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.366 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:51.366 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:51.624 true 00:10:51.624 11:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:51.624 11:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.000 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.000 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:53.000 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:53.259 true 00:10:53.259 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:53.259 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:54.196 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:54.196 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:54.196 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:54.455 true 00:10:54.455 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:54.455 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.455 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.714 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:54.714 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:54.973 true 00:10:54.973 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:54.973 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.233 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.233 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1017 00:10:55.233 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:55.491 true 00:10:55.491 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:55.491 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.750 11:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.750 11:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:55.750 11:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:56.008 true 00:10:56.008 11:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:56.008 11:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.385 11:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.385 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.385 11:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:57.385 11:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:57.643 true 00:10:57.643 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:57.643 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.580 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.580 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:58.580 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:58.839 true 00:10:58.839 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:58.839 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:58.839 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.097 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:59.097 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:59.354 true 00:10:59.354 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:10:59.354 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.544 11:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.544 11:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:00.544 11:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:00.801 true 00:11:00.801 11:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:00.801 11:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.788 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.788 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:01.788 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:02.045 true 00:11:02.045 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:02.045 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.303 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.303 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1024 00:11:02.303 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:02.561 true 00:11:02.561 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:02.561 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.820 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.077 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:03.077 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:03.077 true 00:11:03.077 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:03.077 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.335 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.594 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:03.594 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:03.594 true 00:11:03.594 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:03.594 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.970 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.970 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:04.970 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:05.229 true 00:11:05.229 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:05.229 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.166 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:11:06.166 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.424 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:06.424 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:06.424 true 00:11:06.424 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:06.424 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.683 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.942 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:06.942 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:06.942 true 00:11:06.942 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:06.942 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.201 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.458 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:07.458 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:07.458 true 00:11:07.458 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:07.458 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.716 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.975 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:07.975 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:07.975 true 00:11:08.234 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:08.234 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.234 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.492 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:08.492 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:08.750 true 00:11:08.750 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:08.750 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.750 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.010 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:09.010 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:09.269 true 00:11:09.269 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:09.269 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.528 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:09.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:09.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:09.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:09.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:09.529 11:21:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:11:09.529 11:21:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:09.788 true 00:11:09.788 11:21:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438 00:11:09.788 11:21:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:10.721 11:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:10.722 11:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:10.722 11:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1035 true
00:11:10.980 11:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438
00:11:10.980 11:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:10.980 Initializing NVMe Controllers
00:11:10.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:10.980 Controller IO queue size 128, less than required.
00:11:10.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:10.980 Controller IO queue size 128, less than required.
00:11:10.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:10.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:10.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:11:10.980 Initialization complete. Launching workers.
00:11:10.980 ========================================================
00:11:10.980                                                                          Latency(us)
00:11:10.980 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:11:10.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1829.83       0.89   38306.11    1589.38 1018978.59
00:11:10.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   14276.45       6.97    8938.13    2484.60  455886.43
00:11:10.980 ========================================================
00:11:10.981 Total                                                                    :   16106.28       7.86   12274.61    1589.38 1018978.59
00:11:10.981
00:11:11.239 11:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:11.497 11:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:11:11.497 11:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:11:11.497 true
00:11:11.497 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 494438
00:11:11.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (494438) - No such process
00:11:11.497 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 494438
00:11:11.497 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:11.756 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:12.014 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:11:12.014 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:11:12.014 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:11:12.014 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:12.014 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:12.014 null0 00:11:12.014 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:12.014 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.014 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:12.271 null1 00:11:12.271 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:12.271 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.271 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:12.529 null2 00:11:12.529 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:12.529 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.529 11:21:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:12.529 null3 00:11:12.529 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:12.529 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.529 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:12.786 null4 00:11:12.786 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:12.786 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.786 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:13.044 null5 00:11:13.044 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.044 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.044 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:13.302 null6 00:11:13.302 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.302 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.302 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:13.302 null7 00:11:13.302 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.303 11:21:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 500041 500042 500043 500046 500048 500050 500052 500054 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.303 11:21:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:13.561 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:13.561 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:13.561 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.561 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:13.561 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:13.561 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:13.561 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:13.561 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:11:13.818 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.076 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:14.334 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:14.334 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.334 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.334 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:14.334 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.334 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:14.334 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:14.334 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:14.592 11:21:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.592 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.592 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:14.592 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.592 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.592 11:21:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:14.592 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.850 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:14.850 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:14.850 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.850 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:14.850 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.850 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:14.850 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:14.850 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.851 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:15.110 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.110 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:15.110 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:15.110 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:15.110 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:15.110 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:15.110 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:15.110 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:15.368 11:21:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.368 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:15.650 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:15.650 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:15.650 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:15.650 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:15.650 11:21:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.650 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:15.909 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.909 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:15.909 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.909 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:15.909 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:15.909 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:15.909 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:15.909 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:15.909 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.909 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.909 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:16.168 11:21:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:16.168 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:16.426 11:21:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.684 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:16.684 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.684 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:16.684 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.684 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.684 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:16.684 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:16.684 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:16.684 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:16.684 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:16.942 11:22:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:16.942 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:17.201 rmmod nvme_tcp 00:11:17.201 rmmod nvme_fabrics 00:11:17.201 rmmod nvme_keyring 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:17.201 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:17.461 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 494046 ']' 00:11:17.461 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 494046 00:11:17.461 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 494046 ']' 00:11:17.461 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 494046 00:11:17.461 11:22:00 
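The interleaved @16-@18 entries above (and in the preceding blocks) are the namespace hotplug loop of target/ns_hotplug_stress.sh: each pass attaches the null0-null7 bdevs to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 and detaches them again while the subsystem stays live. A minimal sketch of the pattern the trace implies; only the rpc.py calls, the NQN, the bdev names and the "i < 10" bound come from the log, while the per-namespace worker layout is an assumption:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  hotplug_worker() {                 # hypothetical helper, one worker per namespace id
      local nsid=$1 bdev=$2 i=0
      while (( i < 10 )); do         # @16 in the trace
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"    # @17: attach the namespace
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"            # @18: detach it again
          (( ++i ))
      done
  }

  for n in $(seq 1 8); do
      hotplug_worker "$n" "null$((n - 1))" &    # null0..null7 back namespaces 1..8
  done
  wait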
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:17.461 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:17.461 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 494046 00:11:17.461 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:17.461 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:17.461 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 494046' 00:11:17.461 killing process with pid 494046 00:11:17.461 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 494046 00:11:17.461 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 494046 00:11:17.461 11:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:17.461 11:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:17.461 11:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:17.461 11:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.461 11:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:17.461 11:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.461 11:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.461 11:22:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.995 11:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:19.995 00:11:19.995 real 0m47.347s 00:11:19.995 user 3m14.424s 00:11:19.995 sys 0m15.767s 00:11:19.995 11:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:19.995 11:22:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.995 ************************************ 00:11:19.995 END TEST nvmf_ns_hotplug_stress 00:11:19.995 ************************************ 00:11:19.995 11:22:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:19.995 11:22:03 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:19.995 11:22:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:19.995 11:22:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.995 11:22:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:19.995 ************************************ 00:11:19.995 START TEST nvmf_connect_stress 00:11:19.995 ************************************ 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:19.995 * Looking for test storage... 
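Once the ten passes finish, the @68/@70 markers show the script dropping its signal trap and calling nvmftestfini, whose trace makes up the block just before the END TEST banner: unload the kernel initiator modules, stop the nvmf_tgt process (pid 494046 in this run), and tear the test network back down. Condensed into one place, with helper internals paraphrased rather than copied verbatim:

  nvmfpid=494046                      # nvmf_tgt pid recorded earlier in this run
  sync
  for i in {1..20}; do                # @121-@123: retry until the initiator modules unload
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  kill "$nvmfpid" && wait "$nvmfpid"  # killprocess (@490): stop the target reactors
  ip -4 addr flush cvl_0_1            # @279: drop the initiator-side test address
  # _remove_spdk_ns (@278) also removes the cvl_0_0_ns_spdk namespace; an explicit
  # 'ip netns delete cvl_0_0_ns_spdk' would be the assumed equivalent outside that helper.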
00:11:19.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.995 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:25.331 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:25.331 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:25.331 Found net devices under 0000:86:00.0: cvl_0_0 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.331 11:22:08 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:25.331 Found net devices under 0000:86:00.1: cvl_0_1 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:25.331 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.590 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.590 11:22:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:25.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:25.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:11:25.590 00:11:25.590 --- 10.0.0.2 ping statistics --- 00:11:25.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.590 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:11:25.590 00:11:25.590 --- 10.0.0.1 ping statistics --- 00:11:25.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.590 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=504418 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 504418 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 504418 ']' 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:25.590 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.590 [2024-07-15 11:22:09.110212] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
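With the E810 ports discovered as cvl_0_0 and cvl_0_1 by the sysfs scan above (@340-@401), nvmf_tcp_init builds the physical-NIC topology for the connect_stress run: the target port is moved into its own network namespace so the same host can drive NVMe/TCP traffic from the initiator port to the target port over the wire, both directions are verified with a ping, and nvmf_tgt is then started inside the namespace. The commands below are the ones visible in the trace, gathered in order:

  ip -4 addr flush cvl_0_0                          # @244/@245: clear stale test addresses
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                      # @248: namespace that owns the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # @251
  ip addr add 10.0.0.1/24 dev cvl_0_1               # @254: initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @255: target side
  ip link set cvl_0_1 up                            # @258-@261: bring both ends (and lo) up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # @264: open TCP/4420 on the initiator side
  ping -c 1 10.0.0.2                                # @267/@268: reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &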
00:11:25.591 [2024-07-15 11:22:09.110261] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.591 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.591 [2024-07-15 11:22:09.178250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:25.849 [2024-07-15 11:22:09.254875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.849 [2024-07-15 11:22:09.254911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.849 [2024-07-15 11:22:09.254917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.849 [2024-07-15 11:22:09.254923] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.849 [2024-07-15 11:22:09.254928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.849 [2024-07-15 11:22:09.255038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.849 [2024-07-15 11:22:09.255146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.849 [2024-07-15 11:22:09.255146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.417 [2024-07-15 11:22:09.951981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.417 [2024-07-15 11:22:09.975312] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.417 NULL1 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=504449 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.417 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.417 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.417 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.417 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.417 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress 
-- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.676 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.934 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.934 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:26.934 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.934 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.934 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.193 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.193 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:27.193 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.193 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.193 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.759 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.759 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:27.759 
11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.759 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.759 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.017 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.017 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:28.017 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.017 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.017 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.275 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.275 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:28.275 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.275 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.275 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.534 11:22:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.534 11:22:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:28.534 11:22:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.534 11:22:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.534 11:22:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.792 11:22:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.792 11:22:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:28.792 11:22:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.792 11:22:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.792 11:22:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.358 11:22:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.358 11:22:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:29.358 11:22:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.358 11:22:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.358 11:22:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.616 11:22:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.616 11:22:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:29.616 11:22:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.616 11:22:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.616 11:22:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.874 11:22:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.874 11:22:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:29.874 11:22:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:29.874 11:22:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.874 11:22:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.132 11:22:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.132 11:22:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:30.132 11:22:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.132 11:22:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.132 11:22:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.389 11:22:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.389 11:22:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:30.389 11:22:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.389 11:22:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.389 11:22:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.953 11:22:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.953 11:22:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:30.953 11:22:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.953 11:22:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.953 11:22:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.210 11:22:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.210 11:22:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:31.210 11:22:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.210 11:22:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.210 11:22:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.466 11:22:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.466 11:22:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:31.466 11:22:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.466 11:22:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.466 11:22:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.723 11:22:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.723 11:22:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:31.723 11:22:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.723 11:22:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.723 11:22:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.979 11:22:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.979 11:22:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:31.979 11:22:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.979 11:22:15 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.979 11:22:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.544 11:22:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.544 11:22:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:32.544 11:22:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.544 11:22:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.544 11:22:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.802 11:22:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.802 11:22:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:32.802 11:22:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.802 11:22:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.802 11:22:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.060 11:22:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.060 11:22:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:33.060 11:22:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.061 11:22:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.061 11:22:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.372 11:22:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.372 11:22:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:33.373 11:22:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.373 11:22:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.373 11:22:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.631 11:22:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.631 11:22:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:33.631 11:22:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.631 11:22:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.631 11:22:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.904 11:22:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.905 11:22:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:33.905 11:22:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.905 11:22:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.905 11:22:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.472 11:22:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.472 11:22:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:34.472 11:22:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.472 11:22:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.472 
11:22:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.730 11:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.730 11:22:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:34.730 11:22:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.730 11:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.730 11:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.988 11:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.988 11:22:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:34.988 11:22:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.988 11:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.988 11:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.246 11:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.246 11:22:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:35.246 11:22:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.246 11:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.246 11:22:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.812 11:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.812 11:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:35.812 11:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.812 11:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.812 11:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.071 11:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.071 11:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:36.071 11:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.071 11:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.071 11:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.329 11:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.329 11:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:36.329 11:22:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.329 11:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.329 11:22:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.587 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.587 11:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:36.587 11:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.587 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.587 11:22:20 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.587 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:36.845 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.845 11:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 504449 00:11:36.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (504449) - No such process 00:11:36.845 11:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 504449 00:11:36.845 11:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:36.845 11:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:36.845 11:22:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:36.845 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:36.845 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:36.845 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:36.845 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:36.845 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:36.845 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:36.845 rmmod nvme_tcp 00:11:37.104 rmmod nvme_fabrics 00:11:37.104 rmmod nvme_keyring 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 504418 ']' 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 504418 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 504418 ']' 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 504418 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 504418 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 504418' 00:11:37.104 killing process with pid 504418 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 504418 00:11:37.104 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 504418 00:11:37.363 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:37.363 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:37.363 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:37.363 11:22:20 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:37.363 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:37.363 11:22:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.363 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:37.363 11:22:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.287 11:22:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:39.287 00:11:39.287 real 0m19.625s 00:11:39.287 user 0m42.089s 00:11:39.287 sys 0m8.163s 00:11:39.287 11:22:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.287 11:22:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.287 ************************************ 00:11:39.287 END TEST nvmf_connect_stress 00:11:39.287 ************************************ 00:11:39.287 11:22:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:39.287 11:22:22 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:39.287 11:22:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:39.287 11:22:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.287 11:22:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:39.287 ************************************ 00:11:39.287 START TEST nvmf_fused_ordering 00:11:39.287 ************************************ 00:11:39.287 11:22:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:39.545 * Looking for test storage... 
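The block of connect_stress.sh@34/@35 entries above is the test's liveness loop: the harness keeps polling the background connect_stress process with kill -0 and, for as long as it is alive, replays a batch of RPCs against the target so the connection path stays under load. A minimal sketch of that pattern, assuming rpc_cmd is the autotest helper that forwards RPCs to the running target, and reusing the PERF_PID/rpc.txt names and connect_stress arguments shown in the trace (the body of the 20-entry cat loop is not visible here, so the batch contents are illustrative):

  # launch the stressor against the listener created above and remember its PID
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!

  # while the stressor is still alive, keep issuing the collected RPC batch
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"      # rpc.txt assembled by the seq 1 20 / cat loop in the trace
  done
  wait "$PERF_PID" || true   # once -t 10 expires, kill -0 reports "No such process", as logged above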
00:11:39.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.545 11:22:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.545 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:39.545 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.545 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.545 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.545 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.545 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:39.546 11:22:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.118 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:46.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:46.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:46.119 Found net devices under 0000:86:00.0: cvl_0_0 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:46.119 11:22:28 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:46.119 Found net devices under 0000:86:00.1: cvl_0_1 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:46.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:46.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:11:46.119 00:11:46.119 --- 10.0.0.2 ping statistics --- 00:11:46.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.119 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:46.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:11:46.119 00:11:46.119 --- 10.0.0.1 ping statistics --- 00:11:46.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.119 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=509822 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 509822 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 509822 ']' 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.119 11:22:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.119 [2024-07-15 11:22:28.820371] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:11:46.119 [2024-07-15 11:22:28.820415] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.119 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.119 [2024-07-15 11:22:28.890438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.119 [2024-07-15 11:22:28.969174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.119 [2024-07-15 11:22:28.969208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.119 [2024-07-15 11:22:28.969215] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.119 [2024-07-15 11:22:28.969221] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.119 [2024-07-15 11:22:28.969230] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.119 [2024-07-15 11:22:28.969252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.119 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:46.119 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:46.119 11:22:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:46.119 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:46.119 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.119 11:22:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.119 11:22:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.119 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.119 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.119 [2024-07-15 11:22:29.663982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.119 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.120 [2024-07-15 11:22:29.684115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.120 11:22:29 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.120 NULL1 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.120 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.379 11:22:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.379 11:22:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:46.379 [2024-07-15 11:22:29.738139] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:11:46.379 [2024-07-15 11:22:29.738169] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509882 ] 00:11:46.379 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.636 Attached to nqn.2016-06.io.spdk:cnode1 00:11:46.636 Namespace ID: 1 size: 1GB 00:11:46.636 fused_ordering(0) 00:11:46.636 fused_ordering(1) 00:11:46.636 fused_ordering(2) 00:11:46.636 fused_ordering(3) 00:11:46.636 fused_ordering(4) 00:11:46.636 fused_ordering(5) 00:11:46.636 fused_ordering(6) 00:11:46.636 fused_ordering(7) 00:11:46.636 fused_ordering(8) 00:11:46.636 fused_ordering(9) 00:11:46.636 fused_ordering(10) 00:11:46.636 fused_ordering(11) 00:11:46.636 fused_ordering(12) 00:11:46.636 fused_ordering(13) 00:11:46.636 fused_ordering(14) 00:11:46.636 fused_ordering(15) 00:11:46.636 fused_ordering(16) 00:11:46.636 fused_ordering(17) 00:11:46.636 fused_ordering(18) 00:11:46.636 fused_ordering(19) 00:11:46.636 fused_ordering(20) 00:11:46.636 fused_ordering(21) 00:11:46.636 fused_ordering(22) 00:11:46.636 fused_ordering(23) 00:11:46.636 fused_ordering(24) 00:11:46.636 fused_ordering(25) 00:11:46.636 fused_ordering(26) 00:11:46.636 fused_ordering(27) 00:11:46.636 fused_ordering(28) 00:11:46.636 fused_ordering(29) 00:11:46.636 fused_ordering(30) 00:11:46.637 fused_ordering(31) 00:11:46.637 fused_ordering(32) 00:11:46.637 fused_ordering(33) 00:11:46.637 fused_ordering(34) 00:11:46.637 fused_ordering(35) 00:11:46.637 fused_ordering(36) 00:11:46.637 fused_ordering(37) 00:11:46.637 fused_ordering(38) 00:11:46.637 fused_ordering(39) 00:11:46.637 fused_ordering(40) 00:11:46.637 fused_ordering(41) 00:11:46.637 fused_ordering(42) 00:11:46.637 fused_ordering(43) 00:11:46.637 
fused_ordering(44) 00:11:46.637 fused_ordering(45) 00:11:46.637 fused_ordering(46) 00:11:46.637 fused_ordering(47) 00:11:46.637 fused_ordering(48) 00:11:46.637 fused_ordering(49) 00:11:46.637 fused_ordering(50) 00:11:46.637 fused_ordering(51) 00:11:46.637 fused_ordering(52) 00:11:46.637 fused_ordering(53) 00:11:46.637 fused_ordering(54) 00:11:46.637 fused_ordering(55) 00:11:46.637 fused_ordering(56) 00:11:46.637 fused_ordering(57) 00:11:46.637 fused_ordering(58) 00:11:46.637 fused_ordering(59) 00:11:46.637 fused_ordering(60) 00:11:46.637 fused_ordering(61) 00:11:46.637 fused_ordering(62) 00:11:46.637 fused_ordering(63) 00:11:46.637 fused_ordering(64) 00:11:46.637 fused_ordering(65) 00:11:46.637 fused_ordering(66) 00:11:46.637 fused_ordering(67) 00:11:46.637 fused_ordering(68) 00:11:46.637 fused_ordering(69) 00:11:46.637 fused_ordering(70) 00:11:46.637 fused_ordering(71) 00:11:46.637 fused_ordering(72) 00:11:46.637 fused_ordering(73) 00:11:46.637 fused_ordering(74) 00:11:46.637 fused_ordering(75) 00:11:46.637 fused_ordering(76) 00:11:46.637 fused_ordering(77) 00:11:46.637 fused_ordering(78) 00:11:46.637 fused_ordering(79) 00:11:46.637 fused_ordering(80) 00:11:46.637 fused_ordering(81) 00:11:46.637 fused_ordering(82) 00:11:46.637 fused_ordering(83) 00:11:46.637 fused_ordering(84) 00:11:46.637 fused_ordering(85) 00:11:46.637 fused_ordering(86) 00:11:46.637 fused_ordering(87) 00:11:46.637 fused_ordering(88) 00:11:46.637 fused_ordering(89) 00:11:46.637 fused_ordering(90) 00:11:46.637 fused_ordering(91) 00:11:46.637 fused_ordering(92) 00:11:46.637 fused_ordering(93) 00:11:46.637 fused_ordering(94) 00:11:46.637 fused_ordering(95) 00:11:46.637 fused_ordering(96) 00:11:46.637 fused_ordering(97) 00:11:46.637 fused_ordering(98) 00:11:46.637 fused_ordering(99) 00:11:46.637 fused_ordering(100) 00:11:46.637 fused_ordering(101) 00:11:46.637 fused_ordering(102) 00:11:46.637 fused_ordering(103) 00:11:46.637 fused_ordering(104) 00:11:46.637 fused_ordering(105) 00:11:46.637 fused_ordering(106) 00:11:46.637 fused_ordering(107) 00:11:46.637 fused_ordering(108) 00:11:46.637 fused_ordering(109) 00:11:46.637 fused_ordering(110) 00:11:46.637 fused_ordering(111) 00:11:46.637 fused_ordering(112) 00:11:46.637 fused_ordering(113) 00:11:46.637 fused_ordering(114) 00:11:46.637 fused_ordering(115) 00:11:46.637 fused_ordering(116) 00:11:46.637 fused_ordering(117) 00:11:46.637 fused_ordering(118) 00:11:46.637 fused_ordering(119) 00:11:46.637 fused_ordering(120) 00:11:46.637 fused_ordering(121) 00:11:46.637 fused_ordering(122) 00:11:46.637 fused_ordering(123) 00:11:46.637 fused_ordering(124) 00:11:46.637 fused_ordering(125) 00:11:46.637 fused_ordering(126) 00:11:46.637 fused_ordering(127) 00:11:46.637 fused_ordering(128) 00:11:46.637 fused_ordering(129) 00:11:46.637 fused_ordering(130) 00:11:46.637 fused_ordering(131) 00:11:46.637 fused_ordering(132) 00:11:46.637 fused_ordering(133) 00:11:46.637 fused_ordering(134) 00:11:46.637 fused_ordering(135) 00:11:46.637 fused_ordering(136) 00:11:46.637 fused_ordering(137) 00:11:46.637 fused_ordering(138) 00:11:46.637 fused_ordering(139) 00:11:46.637 fused_ordering(140) 00:11:46.637 fused_ordering(141) 00:11:46.637 fused_ordering(142) 00:11:46.637 fused_ordering(143) 00:11:46.637 fused_ordering(144) 00:11:46.637 fused_ordering(145) 00:11:46.637 fused_ordering(146) 00:11:46.637 fused_ordering(147) 00:11:46.637 fused_ordering(148) 00:11:46.637 fused_ordering(149) 00:11:46.637 fused_ordering(150) 00:11:46.637 fused_ordering(151) 00:11:46.637 fused_ordering(152) 00:11:46.637 
fused_ordering(153) 00:11:46.637 fused_ordering(154) 00:11:46.637 fused_ordering(155) 00:11:46.637 fused_ordering(156) 00:11:46.637 fused_ordering(157) 00:11:46.637 fused_ordering(158) 00:11:46.637 fused_ordering(159) 00:11:46.637 fused_ordering(160) 00:11:46.637 fused_ordering(161) 00:11:46.637 fused_ordering(162) 00:11:46.637 fused_ordering(163) 00:11:46.637 fused_ordering(164) 00:11:46.637 fused_ordering(165) 00:11:46.637 fused_ordering(166) 00:11:46.637 fused_ordering(167) 00:11:46.637 fused_ordering(168) 00:11:46.637 fused_ordering(169) 00:11:46.637 fused_ordering(170) 00:11:46.637 fused_ordering(171) 00:11:46.637 fused_ordering(172) 00:11:46.637 fused_ordering(173) 00:11:46.637 fused_ordering(174) 00:11:46.637 fused_ordering(175) 00:11:46.637 fused_ordering(176) 00:11:46.637 fused_ordering(177) 00:11:46.637 fused_ordering(178) 00:11:46.637 fused_ordering(179) 00:11:46.637 fused_ordering(180) 00:11:46.637 fused_ordering(181) 00:11:46.637 fused_ordering(182) 00:11:46.637 fused_ordering(183) 00:11:46.637 fused_ordering(184) 00:11:46.637 fused_ordering(185) 00:11:46.637 fused_ordering(186) 00:11:46.637 fused_ordering(187) 00:11:46.637 fused_ordering(188) 00:11:46.637 fused_ordering(189) 00:11:46.637 fused_ordering(190) 00:11:46.637 fused_ordering(191) 00:11:46.637 fused_ordering(192) 00:11:46.637 fused_ordering(193) 00:11:46.637 fused_ordering(194) 00:11:46.637 fused_ordering(195) 00:11:46.637 fused_ordering(196) 00:11:46.637 fused_ordering(197) 00:11:46.637 fused_ordering(198) 00:11:46.637 fused_ordering(199) 00:11:46.637 fused_ordering(200) 00:11:46.637 fused_ordering(201) 00:11:46.637 fused_ordering(202) 00:11:46.637 fused_ordering(203) 00:11:46.637 fused_ordering(204) 00:11:46.637 fused_ordering(205) 00:11:46.895 fused_ordering(206) 00:11:46.895 fused_ordering(207) 00:11:46.895 fused_ordering(208) 00:11:46.895 fused_ordering(209) 00:11:46.895 fused_ordering(210) 00:11:46.895 fused_ordering(211) 00:11:46.895 fused_ordering(212) 00:11:46.895 fused_ordering(213) 00:11:46.895 fused_ordering(214) 00:11:46.895 fused_ordering(215) 00:11:46.895 fused_ordering(216) 00:11:46.895 fused_ordering(217) 00:11:46.895 fused_ordering(218) 00:11:46.895 fused_ordering(219) 00:11:46.895 fused_ordering(220) 00:11:46.895 fused_ordering(221) 00:11:46.895 fused_ordering(222) 00:11:46.895 fused_ordering(223) 00:11:46.895 fused_ordering(224) 00:11:46.895 fused_ordering(225) 00:11:46.895 fused_ordering(226) 00:11:46.895 fused_ordering(227) 00:11:46.895 fused_ordering(228) 00:11:46.895 fused_ordering(229) 00:11:46.895 fused_ordering(230) 00:11:46.895 fused_ordering(231) 00:11:46.895 fused_ordering(232) 00:11:46.895 fused_ordering(233) 00:11:46.895 fused_ordering(234) 00:11:46.895 fused_ordering(235) 00:11:46.895 fused_ordering(236) 00:11:46.895 fused_ordering(237) 00:11:46.895 fused_ordering(238) 00:11:46.895 fused_ordering(239) 00:11:46.895 fused_ordering(240) 00:11:46.895 fused_ordering(241) 00:11:46.895 fused_ordering(242) 00:11:46.895 fused_ordering(243) 00:11:46.895 fused_ordering(244) 00:11:46.895 fused_ordering(245) 00:11:46.895 fused_ordering(246) 00:11:46.895 fused_ordering(247) 00:11:46.895 fused_ordering(248) 00:11:46.895 fused_ordering(249) 00:11:46.895 fused_ordering(250) 00:11:46.895 fused_ordering(251) 00:11:46.895 fused_ordering(252) 00:11:46.895 fused_ordering(253) 00:11:46.895 fused_ordering(254) 00:11:46.895 fused_ordering(255) 00:11:46.895 fused_ordering(256) 00:11:46.895 fused_ordering(257) 00:11:46.895 fused_ordering(258) 00:11:46.895 fused_ordering(259) 00:11:46.895 fused_ordering(260) 
00:11:46.895 fused_ordering(261) 00:11:46.895 fused_ordering(262) 00:11:46.895 fused_ordering(263) 00:11:46.895 fused_ordering(264) 00:11:46.895 fused_ordering(265) 00:11:46.895 fused_ordering(266) 00:11:46.895 fused_ordering(267) 00:11:46.895 fused_ordering(268) 00:11:46.895 fused_ordering(269) 00:11:46.895 fused_ordering(270) 00:11:46.895 fused_ordering(271) 00:11:46.895 fused_ordering(272) 00:11:46.895 fused_ordering(273) 00:11:46.895 fused_ordering(274) 00:11:46.895 fused_ordering(275) 00:11:46.895 fused_ordering(276) 00:11:46.895 fused_ordering(277) 00:11:46.895 fused_ordering(278) 00:11:46.895 fused_ordering(279) 00:11:46.895 fused_ordering(280) 00:11:46.895 fused_ordering(281) 00:11:46.895 fused_ordering(282) 00:11:46.895 fused_ordering(283) 00:11:46.895 fused_ordering(284) 00:11:46.895 fused_ordering(285) 00:11:46.895 fused_ordering(286) 00:11:46.895 fused_ordering(287) 00:11:46.895 fused_ordering(288) 00:11:46.895 fused_ordering(289) 00:11:46.895 fused_ordering(290) 00:11:46.895 fused_ordering(291) 00:11:46.895 fused_ordering(292) 00:11:46.895 fused_ordering(293) 00:11:46.895 fused_ordering(294) 00:11:46.895 fused_ordering(295) 00:11:46.895 fused_ordering(296) 00:11:46.895 fused_ordering(297) 00:11:46.895 fused_ordering(298) 00:11:46.895 fused_ordering(299) 00:11:46.895 fused_ordering(300) 00:11:46.895 fused_ordering(301) 00:11:46.895 fused_ordering(302) 00:11:46.895 fused_ordering(303) 00:11:46.895 fused_ordering(304) 00:11:46.895 fused_ordering(305) 00:11:46.895 fused_ordering(306) 00:11:46.895 fused_ordering(307) 00:11:46.895 fused_ordering(308) 00:11:46.895 fused_ordering(309) 00:11:46.895 fused_ordering(310) 00:11:46.895 fused_ordering(311) 00:11:46.895 fused_ordering(312) 00:11:46.895 fused_ordering(313) 00:11:46.895 fused_ordering(314) 00:11:46.895 fused_ordering(315) 00:11:46.895 fused_ordering(316) 00:11:46.895 fused_ordering(317) 00:11:46.895 fused_ordering(318) 00:11:46.895 fused_ordering(319) 00:11:46.895 fused_ordering(320) 00:11:46.895 fused_ordering(321) 00:11:46.895 fused_ordering(322) 00:11:46.895 fused_ordering(323) 00:11:46.895 fused_ordering(324) 00:11:46.895 fused_ordering(325) 00:11:46.895 fused_ordering(326) 00:11:46.895 fused_ordering(327) 00:11:46.895 fused_ordering(328) 00:11:46.895 fused_ordering(329) 00:11:46.895 fused_ordering(330) 00:11:46.895 fused_ordering(331) 00:11:46.895 fused_ordering(332) 00:11:46.895 fused_ordering(333) 00:11:46.895 fused_ordering(334) 00:11:46.895 fused_ordering(335) 00:11:46.895 fused_ordering(336) 00:11:46.895 fused_ordering(337) 00:11:46.895 fused_ordering(338) 00:11:46.895 fused_ordering(339) 00:11:46.895 fused_ordering(340) 00:11:46.895 fused_ordering(341) 00:11:46.895 fused_ordering(342) 00:11:46.895 fused_ordering(343) 00:11:46.895 fused_ordering(344) 00:11:46.895 fused_ordering(345) 00:11:46.895 fused_ordering(346) 00:11:46.895 fused_ordering(347) 00:11:46.895 fused_ordering(348) 00:11:46.895 fused_ordering(349) 00:11:46.895 fused_ordering(350) 00:11:46.895 fused_ordering(351) 00:11:46.895 fused_ordering(352) 00:11:46.895 fused_ordering(353) 00:11:46.895 fused_ordering(354) 00:11:46.895 fused_ordering(355) 00:11:46.895 fused_ordering(356) 00:11:46.895 fused_ordering(357) 00:11:46.895 fused_ordering(358) 00:11:46.895 fused_ordering(359) 00:11:46.895 fused_ordering(360) 00:11:46.895 fused_ordering(361) 00:11:46.895 fused_ordering(362) 00:11:46.895 fused_ordering(363) 00:11:46.895 fused_ordering(364) 00:11:46.895 fused_ordering(365) 00:11:46.895 fused_ordering(366) 00:11:46.895 fused_ordering(367) 00:11:46.895 
fused_ordering(368) 00:11:46.895 fused_ordering(369) 00:11:46.895 fused_ordering(370) 00:11:46.895 fused_ordering(371) 00:11:46.895 fused_ordering(372) 00:11:46.895 fused_ordering(373) 00:11:46.895 fused_ordering(374) 00:11:46.895 fused_ordering(375) 00:11:46.895 fused_ordering(376) 00:11:46.895 fused_ordering(377) 00:11:46.895 fused_ordering(378) 00:11:46.895 fused_ordering(379) 00:11:46.895 fused_ordering(380) 00:11:46.895 fused_ordering(381) 00:11:46.895 fused_ordering(382) 00:11:46.895 fused_ordering(383) 00:11:46.895 fused_ordering(384) 00:11:46.895 fused_ordering(385) 00:11:46.895 fused_ordering(386) 00:11:46.895 fused_ordering(387) 00:11:46.895 fused_ordering(388) 00:11:46.895 fused_ordering(389) 00:11:46.895 fused_ordering(390) 00:11:46.895 fused_ordering(391) 00:11:46.895 fused_ordering(392) 00:11:46.895 fused_ordering(393) 00:11:46.895 fused_ordering(394) 00:11:46.895 fused_ordering(395) 00:11:46.895 fused_ordering(396) 00:11:46.895 fused_ordering(397) 00:11:46.895 fused_ordering(398) 00:11:46.895 fused_ordering(399) 00:11:46.895 fused_ordering(400) 00:11:46.895 fused_ordering(401) 00:11:46.895 fused_ordering(402) 00:11:46.895 fused_ordering(403) 00:11:46.895 fused_ordering(404) 00:11:46.895 fused_ordering(405) 00:11:46.895 fused_ordering(406) 00:11:46.895 fused_ordering(407) 00:11:46.895 fused_ordering(408) 00:11:46.895 fused_ordering(409) 00:11:46.895 fused_ordering(410) 00:11:47.153 fused_ordering(411) 00:11:47.153 fused_ordering(412) 00:11:47.153 fused_ordering(413) 00:11:47.154 fused_ordering(414) 00:11:47.154 fused_ordering(415) 00:11:47.154 fused_ordering(416) 00:11:47.154 fused_ordering(417) 00:11:47.154 fused_ordering(418) 00:11:47.154 fused_ordering(419) 00:11:47.154 fused_ordering(420) 00:11:47.154 fused_ordering(421) 00:11:47.154 fused_ordering(422) 00:11:47.154 fused_ordering(423) 00:11:47.154 fused_ordering(424) 00:11:47.154 fused_ordering(425) 00:11:47.154 fused_ordering(426) 00:11:47.154 fused_ordering(427) 00:11:47.154 fused_ordering(428) 00:11:47.154 fused_ordering(429) 00:11:47.154 fused_ordering(430) 00:11:47.154 fused_ordering(431) 00:11:47.154 fused_ordering(432) 00:11:47.154 fused_ordering(433) 00:11:47.154 fused_ordering(434) 00:11:47.154 fused_ordering(435) 00:11:47.154 fused_ordering(436) 00:11:47.154 fused_ordering(437) 00:11:47.154 fused_ordering(438) 00:11:47.154 fused_ordering(439) 00:11:47.154 fused_ordering(440) 00:11:47.154 fused_ordering(441) 00:11:47.154 fused_ordering(442) 00:11:47.154 fused_ordering(443) 00:11:47.154 fused_ordering(444) 00:11:47.154 fused_ordering(445) 00:11:47.154 fused_ordering(446) 00:11:47.154 fused_ordering(447) 00:11:47.154 fused_ordering(448) 00:11:47.154 fused_ordering(449) 00:11:47.154 fused_ordering(450) 00:11:47.154 fused_ordering(451) 00:11:47.154 fused_ordering(452) 00:11:47.154 fused_ordering(453) 00:11:47.154 fused_ordering(454) 00:11:47.154 fused_ordering(455) 00:11:47.154 fused_ordering(456) 00:11:47.154 fused_ordering(457) 00:11:47.154 fused_ordering(458) 00:11:47.154 fused_ordering(459) 00:11:47.154 fused_ordering(460) 00:11:47.154 fused_ordering(461) 00:11:47.154 fused_ordering(462) 00:11:47.154 fused_ordering(463) 00:11:47.154 fused_ordering(464) 00:11:47.154 fused_ordering(465) 00:11:47.154 fused_ordering(466) 00:11:47.154 fused_ordering(467) 00:11:47.154 fused_ordering(468) 00:11:47.154 fused_ordering(469) 00:11:47.154 fused_ordering(470) 00:11:47.154 fused_ordering(471) 00:11:47.154 fused_ordering(472) 00:11:47.154 fused_ordering(473) 00:11:47.154 fused_ordering(474) 00:11:47.154 fused_ordering(475) 
00:11:47.154 fused_ordering(476) 00:11:47.154 fused_ordering(477) 00:11:47.154 fused_ordering(478) 00:11:47.154 fused_ordering(479) 00:11:47.154 fused_ordering(480) 00:11:47.154 fused_ordering(481) 00:11:47.154 fused_ordering(482) 00:11:47.154 fused_ordering(483) 00:11:47.154 fused_ordering(484) 00:11:47.154 fused_ordering(485) 00:11:47.154 fused_ordering(486) 00:11:47.154 fused_ordering(487) 00:11:47.154 fused_ordering(488) 00:11:47.154 fused_ordering(489) 00:11:47.154 fused_ordering(490) 00:11:47.154 fused_ordering(491) 00:11:47.154 fused_ordering(492) 00:11:47.154 fused_ordering(493) 00:11:47.154 fused_ordering(494) 00:11:47.154 fused_ordering(495) 00:11:47.154 fused_ordering(496) 00:11:47.154 fused_ordering(497) 00:11:47.154 fused_ordering(498) 00:11:47.154 fused_ordering(499) 00:11:47.154 fused_ordering(500) 00:11:47.154 fused_ordering(501) 00:11:47.154 fused_ordering(502) 00:11:47.154 fused_ordering(503) 00:11:47.154 fused_ordering(504) 00:11:47.154 fused_ordering(505) 00:11:47.154 fused_ordering(506) 00:11:47.154 fused_ordering(507) 00:11:47.154 fused_ordering(508) 00:11:47.154 fused_ordering(509) 00:11:47.154 fused_ordering(510) 00:11:47.154 fused_ordering(511) 00:11:47.154 fused_ordering(512) 00:11:47.154 fused_ordering(513) 00:11:47.154 fused_ordering(514) 00:11:47.154 fused_ordering(515) 00:11:47.154 fused_ordering(516) 00:11:47.154 fused_ordering(517) 00:11:47.154 fused_ordering(518) 00:11:47.154 fused_ordering(519) 00:11:47.154 fused_ordering(520) 00:11:47.154 fused_ordering(521) 00:11:47.154 fused_ordering(522) 00:11:47.154 fused_ordering(523) 00:11:47.154 fused_ordering(524) 00:11:47.154 fused_ordering(525) 00:11:47.154 fused_ordering(526) 00:11:47.154 fused_ordering(527) 00:11:47.154 fused_ordering(528) 00:11:47.154 fused_ordering(529) 00:11:47.154 fused_ordering(530) 00:11:47.154 fused_ordering(531) 00:11:47.154 fused_ordering(532) 00:11:47.154 fused_ordering(533) 00:11:47.154 fused_ordering(534) 00:11:47.154 fused_ordering(535) 00:11:47.154 fused_ordering(536) 00:11:47.154 fused_ordering(537) 00:11:47.154 fused_ordering(538) 00:11:47.154 fused_ordering(539) 00:11:47.154 fused_ordering(540) 00:11:47.154 fused_ordering(541) 00:11:47.154 fused_ordering(542) 00:11:47.154 fused_ordering(543) 00:11:47.154 fused_ordering(544) 00:11:47.154 fused_ordering(545) 00:11:47.154 fused_ordering(546) 00:11:47.154 fused_ordering(547) 00:11:47.154 fused_ordering(548) 00:11:47.154 fused_ordering(549) 00:11:47.154 fused_ordering(550) 00:11:47.154 fused_ordering(551) 00:11:47.154 fused_ordering(552) 00:11:47.154 fused_ordering(553) 00:11:47.154 fused_ordering(554) 00:11:47.154 fused_ordering(555) 00:11:47.154 fused_ordering(556) 00:11:47.154 fused_ordering(557) 00:11:47.154 fused_ordering(558) 00:11:47.154 fused_ordering(559) 00:11:47.154 fused_ordering(560) 00:11:47.154 fused_ordering(561) 00:11:47.154 fused_ordering(562) 00:11:47.154 fused_ordering(563) 00:11:47.154 fused_ordering(564) 00:11:47.154 fused_ordering(565) 00:11:47.154 fused_ordering(566) 00:11:47.154 fused_ordering(567) 00:11:47.154 fused_ordering(568) 00:11:47.154 fused_ordering(569) 00:11:47.154 fused_ordering(570) 00:11:47.154 fused_ordering(571) 00:11:47.154 fused_ordering(572) 00:11:47.154 fused_ordering(573) 00:11:47.154 fused_ordering(574) 00:11:47.154 fused_ordering(575) 00:11:47.154 fused_ordering(576) 00:11:47.154 fused_ordering(577) 00:11:47.154 fused_ordering(578) 00:11:47.154 fused_ordering(579) 00:11:47.154 fused_ordering(580) 00:11:47.154 fused_ordering(581) 00:11:47.154 fused_ordering(582) 00:11:47.154 
fused_ordering(583) 00:11:47.154 fused_ordering(584) 00:11:47.154 fused_ordering(585) 00:11:47.154 fused_ordering(586) 00:11:47.154 fused_ordering(587) 00:11:47.154 fused_ordering(588) 00:11:47.154 fused_ordering(589) 00:11:47.154 fused_ordering(590) 00:11:47.154 fused_ordering(591) 00:11:47.154 fused_ordering(592) 00:11:47.154 fused_ordering(593) 00:11:47.154 fused_ordering(594) 00:11:47.154 fused_ordering(595) 00:11:47.154 fused_ordering(596) 00:11:47.154 fused_ordering(597) 00:11:47.154 fused_ordering(598) 00:11:47.154 fused_ordering(599) 00:11:47.154 fused_ordering(600) 00:11:47.154 fused_ordering(601) 00:11:47.154 fused_ordering(602) 00:11:47.154 fused_ordering(603) 00:11:47.154 fused_ordering(604) 00:11:47.154 fused_ordering(605) 00:11:47.154 fused_ordering(606) 00:11:47.154 fused_ordering(607) 00:11:47.154 fused_ordering(608) 00:11:47.154 fused_ordering(609) 00:11:47.154 fused_ordering(610) 00:11:47.154 fused_ordering(611) 00:11:47.154 fused_ordering(612) 00:11:47.154 fused_ordering(613) 00:11:47.154 fused_ordering(614) 00:11:47.154 fused_ordering(615) 00:11:47.720 fused_ordering(616) 00:11:47.720 fused_ordering(617) 00:11:47.720 fused_ordering(618) 00:11:47.720 fused_ordering(619) 00:11:47.720 fused_ordering(620) 00:11:47.720 fused_ordering(621) 00:11:47.720 fused_ordering(622) 00:11:47.720 fused_ordering(623) 00:11:47.720 fused_ordering(624) 00:11:47.720 fused_ordering(625) 00:11:47.720 fused_ordering(626) 00:11:47.720 fused_ordering(627) 00:11:47.720 fused_ordering(628) 00:11:47.720 fused_ordering(629) 00:11:47.720 fused_ordering(630) 00:11:47.720 fused_ordering(631) 00:11:47.720 fused_ordering(632) 00:11:47.720 fused_ordering(633) 00:11:47.720 fused_ordering(634) 00:11:47.720 fused_ordering(635) 00:11:47.720 fused_ordering(636) 00:11:47.720 fused_ordering(637) 00:11:47.720 fused_ordering(638) 00:11:47.720 fused_ordering(639) 00:11:47.720 fused_ordering(640) 00:11:47.720 fused_ordering(641) 00:11:47.720 fused_ordering(642) 00:11:47.720 fused_ordering(643) 00:11:47.720 fused_ordering(644) 00:11:47.720 fused_ordering(645) 00:11:47.720 fused_ordering(646) 00:11:47.720 fused_ordering(647) 00:11:47.720 fused_ordering(648) 00:11:47.720 fused_ordering(649) 00:11:47.720 fused_ordering(650) 00:11:47.720 fused_ordering(651) 00:11:47.720 fused_ordering(652) 00:11:47.720 fused_ordering(653) 00:11:47.720 fused_ordering(654) 00:11:47.720 fused_ordering(655) 00:11:47.720 fused_ordering(656) 00:11:47.720 fused_ordering(657) 00:11:47.720 fused_ordering(658) 00:11:47.720 fused_ordering(659) 00:11:47.720 fused_ordering(660) 00:11:47.720 fused_ordering(661) 00:11:47.720 fused_ordering(662) 00:11:47.720 fused_ordering(663) 00:11:47.720 fused_ordering(664) 00:11:47.720 fused_ordering(665) 00:11:47.720 fused_ordering(666) 00:11:47.720 fused_ordering(667) 00:11:47.720 fused_ordering(668) 00:11:47.720 fused_ordering(669) 00:11:47.720 fused_ordering(670) 00:11:47.720 fused_ordering(671) 00:11:47.720 fused_ordering(672) 00:11:47.720 fused_ordering(673) 00:11:47.720 fused_ordering(674) 00:11:47.720 fused_ordering(675) 00:11:47.720 fused_ordering(676) 00:11:47.720 fused_ordering(677) 00:11:47.720 fused_ordering(678) 00:11:47.720 fused_ordering(679) 00:11:47.720 fused_ordering(680) 00:11:47.720 fused_ordering(681) 00:11:47.720 fused_ordering(682) 00:11:47.720 fused_ordering(683) 00:11:47.720 fused_ordering(684) 00:11:47.720 fused_ordering(685) 00:11:47.720 fused_ordering(686) 00:11:47.720 fused_ordering(687) 00:11:47.720 fused_ordering(688) 00:11:47.720 fused_ordering(689) 00:11:47.720 fused_ordering(690) 
00:11:47.720 fused_ordering(691) 00:11:47.720 fused_ordering(692) 00:11:47.720 fused_ordering(693) 00:11:47.720 fused_ordering(694) 00:11:47.720 fused_ordering(695) 00:11:47.720 fused_ordering(696) 00:11:47.720 fused_ordering(697) 00:11:47.720 fused_ordering(698) 00:11:47.720 fused_ordering(699) 00:11:47.720 fused_ordering(700) 00:11:47.720 fused_ordering(701) 00:11:47.720 fused_ordering(702) 00:11:47.720 fused_ordering(703) 00:11:47.720 fused_ordering(704) 00:11:47.720 fused_ordering(705) 00:11:47.721 fused_ordering(706) 00:11:47.721 fused_ordering(707) 00:11:47.721 fused_ordering(708) 00:11:47.721 fused_ordering(709) 00:11:47.721 fused_ordering(710) 00:11:47.721 fused_ordering(711) 00:11:47.721 fused_ordering(712) 00:11:47.721 fused_ordering(713) 00:11:47.721 fused_ordering(714) 00:11:47.721 fused_ordering(715) 00:11:47.721 fused_ordering(716) 00:11:47.721 fused_ordering(717) 00:11:47.721 fused_ordering(718) 00:11:47.721 fused_ordering(719) 00:11:47.721 fused_ordering(720) 00:11:47.721 fused_ordering(721) 00:11:47.721 fused_ordering(722) 00:11:47.721 fused_ordering(723) 00:11:47.721 fused_ordering(724) 00:11:47.721 fused_ordering(725) 00:11:47.721 fused_ordering(726) 00:11:47.721 fused_ordering(727) 00:11:47.721 fused_ordering(728) 00:11:47.721 fused_ordering(729) 00:11:47.721 fused_ordering(730) 00:11:47.721 fused_ordering(731) 00:11:47.721 fused_ordering(732) 00:11:47.721 fused_ordering(733) 00:11:47.721 fused_ordering(734) 00:11:47.721 fused_ordering(735) 00:11:47.721 fused_ordering(736) 00:11:47.721 fused_ordering(737) 00:11:47.721 fused_ordering(738) 00:11:47.721 fused_ordering(739) 00:11:47.721 fused_ordering(740) 00:11:47.721 fused_ordering(741) 00:11:47.721 fused_ordering(742) 00:11:47.721 fused_ordering(743) 00:11:47.721 fused_ordering(744) 00:11:47.721 fused_ordering(745) 00:11:47.721 fused_ordering(746) 00:11:47.721 fused_ordering(747) 00:11:47.721 fused_ordering(748) 00:11:47.721 fused_ordering(749) 00:11:47.721 fused_ordering(750) 00:11:47.721 fused_ordering(751) 00:11:47.721 fused_ordering(752) 00:11:47.721 fused_ordering(753) 00:11:47.721 fused_ordering(754) 00:11:47.721 fused_ordering(755) 00:11:47.721 fused_ordering(756) 00:11:47.721 fused_ordering(757) 00:11:47.721 fused_ordering(758) 00:11:47.721 fused_ordering(759) 00:11:47.721 fused_ordering(760) 00:11:47.721 fused_ordering(761) 00:11:47.721 fused_ordering(762) 00:11:47.721 fused_ordering(763) 00:11:47.721 fused_ordering(764) 00:11:47.721 fused_ordering(765) 00:11:47.721 fused_ordering(766) 00:11:47.721 fused_ordering(767) 00:11:47.721 fused_ordering(768) 00:11:47.721 fused_ordering(769) 00:11:47.721 fused_ordering(770) 00:11:47.721 fused_ordering(771) 00:11:47.721 fused_ordering(772) 00:11:47.721 fused_ordering(773) 00:11:47.721 fused_ordering(774) 00:11:47.721 fused_ordering(775) 00:11:47.721 fused_ordering(776) 00:11:47.721 fused_ordering(777) 00:11:47.721 fused_ordering(778) 00:11:47.721 fused_ordering(779) 00:11:47.721 fused_ordering(780) 00:11:47.721 fused_ordering(781) 00:11:47.721 fused_ordering(782) 00:11:47.721 fused_ordering(783) 00:11:47.721 fused_ordering(784) 00:11:47.721 fused_ordering(785) 00:11:47.721 fused_ordering(786) 00:11:47.721 fused_ordering(787) 00:11:47.721 fused_ordering(788) 00:11:47.721 fused_ordering(789) 00:11:47.721 fused_ordering(790) 00:11:47.721 fused_ordering(791) 00:11:47.721 fused_ordering(792) 00:11:47.721 fused_ordering(793) 00:11:47.721 fused_ordering(794) 00:11:47.721 fused_ordering(795) 00:11:47.721 fused_ordering(796) 00:11:47.721 fused_ordering(797) 00:11:47.721 
fused_ordering(798) 00:11:47.721 fused_ordering(799) 00:11:47.721 fused_ordering(800) 00:11:47.721 fused_ordering(801) 00:11:47.721 fused_ordering(802) 00:11:47.721 fused_ordering(803) 00:11:47.721 fused_ordering(804) 00:11:47.721 fused_ordering(805) 00:11:47.721 fused_ordering(806) 00:11:47.721 fused_ordering(807) 00:11:47.721 fused_ordering(808) 00:11:47.721 fused_ordering(809) 00:11:47.721 fused_ordering(810) 00:11:47.721 fused_ordering(811) 00:11:47.721 fused_ordering(812) 00:11:47.721 fused_ordering(813) 00:11:47.721 fused_ordering(814) 00:11:47.721 fused_ordering(815) 00:11:47.721 fused_ordering(816) 00:11:47.721 fused_ordering(817) 00:11:47.721 fused_ordering(818) 00:11:47.721 fused_ordering(819) 00:11:47.721 fused_ordering(820) 00:11:48.288 fused_ordering(821) 00:11:48.289 fused_ordering(822) 00:11:48.289 fused_ordering(823) 00:11:48.289 fused_ordering(824) 00:11:48.289 fused_ordering(825) 00:11:48.289 fused_ordering(826) 00:11:48.289 fused_ordering(827) 00:11:48.289 fused_ordering(828) 00:11:48.289 fused_ordering(829) 00:11:48.289 fused_ordering(830) 00:11:48.289 fused_ordering(831) 00:11:48.289 fused_ordering(832) 00:11:48.289 fused_ordering(833) 00:11:48.289 fused_ordering(834) 00:11:48.289 fused_ordering(835) 00:11:48.289 fused_ordering(836) 00:11:48.289 fused_ordering(837) 00:11:48.289 fused_ordering(838) 00:11:48.289 fused_ordering(839) 00:11:48.289 fused_ordering(840) 00:11:48.289 fused_ordering(841) 00:11:48.289 fused_ordering(842) 00:11:48.289 fused_ordering(843) 00:11:48.289 fused_ordering(844) 00:11:48.289 fused_ordering(845) 00:11:48.289 fused_ordering(846) 00:11:48.289 fused_ordering(847) 00:11:48.289 fused_ordering(848) 00:11:48.289 fused_ordering(849) 00:11:48.289 fused_ordering(850) 00:11:48.289 fused_ordering(851) 00:11:48.289 fused_ordering(852) 00:11:48.289 fused_ordering(853) 00:11:48.289 fused_ordering(854) 00:11:48.289 fused_ordering(855) 00:11:48.289 fused_ordering(856) 00:11:48.289 fused_ordering(857) 00:11:48.289 fused_ordering(858) 00:11:48.289 fused_ordering(859) 00:11:48.289 fused_ordering(860) 00:11:48.289 fused_ordering(861) 00:11:48.289 fused_ordering(862) 00:11:48.289 fused_ordering(863) 00:11:48.289 fused_ordering(864) 00:11:48.289 fused_ordering(865) 00:11:48.289 fused_ordering(866) 00:11:48.289 fused_ordering(867) 00:11:48.289 fused_ordering(868) 00:11:48.289 fused_ordering(869) 00:11:48.289 fused_ordering(870) 00:11:48.289 fused_ordering(871) 00:11:48.289 fused_ordering(872) 00:11:48.289 fused_ordering(873) 00:11:48.289 fused_ordering(874) 00:11:48.289 fused_ordering(875) 00:11:48.289 fused_ordering(876) 00:11:48.289 fused_ordering(877) 00:11:48.289 fused_ordering(878) 00:11:48.289 fused_ordering(879) 00:11:48.289 fused_ordering(880) 00:11:48.289 fused_ordering(881) 00:11:48.289 fused_ordering(882) 00:11:48.289 fused_ordering(883) 00:11:48.289 fused_ordering(884) 00:11:48.289 fused_ordering(885) 00:11:48.289 fused_ordering(886) 00:11:48.289 fused_ordering(887) 00:11:48.289 fused_ordering(888) 00:11:48.289 fused_ordering(889) 00:11:48.289 fused_ordering(890) 00:11:48.289 fused_ordering(891) 00:11:48.289 fused_ordering(892) 00:11:48.289 fused_ordering(893) 00:11:48.289 fused_ordering(894) 00:11:48.289 fused_ordering(895) 00:11:48.289 fused_ordering(896) 00:11:48.289 fused_ordering(897) 00:11:48.289 fused_ordering(898) 00:11:48.289 fused_ordering(899) 00:11:48.289 fused_ordering(900) 00:11:48.289 fused_ordering(901) 00:11:48.289 fused_ordering(902) 00:11:48.289 fused_ordering(903) 00:11:48.289 fused_ordering(904) 00:11:48.289 fused_ordering(905) 
00:11:48.289 fused_ordering(906) 00:11:48.289 fused_ordering(907) 00:11:48.289 fused_ordering(908) 00:11:48.289 fused_ordering(909) 00:11:48.289 fused_ordering(910) 00:11:48.289 fused_ordering(911) 00:11:48.289 fused_ordering(912) 00:11:48.289 fused_ordering(913) 00:11:48.289 fused_ordering(914) 00:11:48.289 fused_ordering(915) 00:11:48.289 fused_ordering(916) 00:11:48.289 fused_ordering(917) 00:11:48.289 fused_ordering(918) 00:11:48.289 fused_ordering(919) 00:11:48.289 fused_ordering(920) 00:11:48.289 fused_ordering(921) 00:11:48.289 fused_ordering(922) 00:11:48.289 fused_ordering(923) 00:11:48.289 fused_ordering(924) 00:11:48.289 fused_ordering(925) 00:11:48.289 fused_ordering(926) 00:11:48.289 fused_ordering(927) 00:11:48.289 fused_ordering(928) 00:11:48.289 fused_ordering(929) 00:11:48.289 fused_ordering(930) 00:11:48.289 fused_ordering(931) 00:11:48.289 fused_ordering(932) 00:11:48.289 fused_ordering(933) 00:11:48.289 fused_ordering(934) 00:11:48.289 fused_ordering(935) 00:11:48.289 fused_ordering(936) 00:11:48.289 fused_ordering(937) 00:11:48.289 fused_ordering(938) 00:11:48.289 fused_ordering(939) 00:11:48.289 fused_ordering(940) 00:11:48.289 fused_ordering(941) 00:11:48.289 fused_ordering(942) 00:11:48.289 fused_ordering(943) 00:11:48.289 fused_ordering(944) 00:11:48.289 fused_ordering(945) 00:11:48.289 fused_ordering(946) 00:11:48.289 fused_ordering(947) 00:11:48.289 fused_ordering(948) 00:11:48.289 fused_ordering(949) 00:11:48.289 fused_ordering(950) 00:11:48.289 fused_ordering(951) 00:11:48.289 fused_ordering(952) 00:11:48.289 fused_ordering(953) 00:11:48.289 fused_ordering(954) 00:11:48.289 fused_ordering(955) 00:11:48.289 fused_ordering(956) 00:11:48.289 fused_ordering(957) 00:11:48.289 fused_ordering(958) 00:11:48.289 fused_ordering(959) 00:11:48.289 fused_ordering(960) 00:11:48.289 fused_ordering(961) 00:11:48.289 fused_ordering(962) 00:11:48.289 fused_ordering(963) 00:11:48.289 fused_ordering(964) 00:11:48.289 fused_ordering(965) 00:11:48.289 fused_ordering(966) 00:11:48.289 fused_ordering(967) 00:11:48.289 fused_ordering(968) 00:11:48.289 fused_ordering(969) 00:11:48.289 fused_ordering(970) 00:11:48.289 fused_ordering(971) 00:11:48.289 fused_ordering(972) 00:11:48.289 fused_ordering(973) 00:11:48.289 fused_ordering(974) 00:11:48.289 fused_ordering(975) 00:11:48.289 fused_ordering(976) 00:11:48.289 fused_ordering(977) 00:11:48.289 fused_ordering(978) 00:11:48.289 fused_ordering(979) 00:11:48.289 fused_ordering(980) 00:11:48.289 fused_ordering(981) 00:11:48.289 fused_ordering(982) 00:11:48.289 fused_ordering(983) 00:11:48.289 fused_ordering(984) 00:11:48.289 fused_ordering(985) 00:11:48.289 fused_ordering(986) 00:11:48.289 fused_ordering(987) 00:11:48.289 fused_ordering(988) 00:11:48.289 fused_ordering(989) 00:11:48.289 fused_ordering(990) 00:11:48.289 fused_ordering(991) 00:11:48.289 fused_ordering(992) 00:11:48.289 fused_ordering(993) 00:11:48.289 fused_ordering(994) 00:11:48.289 fused_ordering(995) 00:11:48.289 fused_ordering(996) 00:11:48.289 fused_ordering(997) 00:11:48.289 fused_ordering(998) 00:11:48.289 fused_ordering(999) 00:11:48.289 fused_ordering(1000) 00:11:48.289 fused_ordering(1001) 00:11:48.289 fused_ordering(1002) 00:11:48.289 fused_ordering(1003) 00:11:48.289 fused_ordering(1004) 00:11:48.289 fused_ordering(1005) 00:11:48.289 fused_ordering(1006) 00:11:48.289 fused_ordering(1007) 00:11:48.289 fused_ordering(1008) 00:11:48.289 fused_ordering(1009) 00:11:48.289 fused_ordering(1010) 00:11:48.289 fused_ordering(1011) 00:11:48.289 fused_ordering(1012) 
00:11:48.289 fused_ordering(1013) 00:11:48.289 fused_ordering(1014) 00:11:48.289 fused_ordering(1015) 00:11:48.289 fused_ordering(1016) 00:11:48.289 fused_ordering(1017) 00:11:48.289 fused_ordering(1018) 00:11:48.289 fused_ordering(1019) 00:11:48.289 fused_ordering(1020) 00:11:48.289 fused_ordering(1021) 00:11:48.289 fused_ordering(1022) 00:11:48.289 fused_ordering(1023) 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:48.289 rmmod nvme_tcp 00:11:48.289 rmmod nvme_fabrics 00:11:48.289 rmmod nvme_keyring 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 509822 ']' 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 509822 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 509822 ']' 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 509822 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 509822 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 509822' 00:11:48.289 killing process with pid 509822 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 509822 00:11:48.289 11:22:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 509822 00:11:48.548 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:48.548 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:48.548 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:48.548 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:48.548 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.548 11:22:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.548 11:22:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:11:48.548 11:22:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.452 11:22:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.452 00:11:50.452 real 0m11.089s 00:11:50.452 user 0m5.541s 00:11:50.452 sys 0m5.872s 00:11:50.452 11:22:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.452 11:22:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:50.452 ************************************ 00:11:50.452 END TEST nvmf_fused_ordering 00:11:50.452 ************************************ 00:11:50.452 11:22:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:50.452 11:22:33 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:50.452 11:22:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:50.452 11:22:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.452 11:22:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:50.452 ************************************ 00:11:50.452 START TEST nvmf_delete_subsystem 00:11:50.452 ************************************ 00:11:50.452 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:50.712 * Looking for test storage... 00:11:50.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.712 11:22:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:57.327 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:57.328 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:57.328 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
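At this point the harness has identified both ports of the Intel E810 NIC (PCI IDs 8086:159b at 0000:86:00.0 and 0000:86:00.1) as the devices to run the TCP test over. The same information can be checked by hand outside the harness; a minimal sketch, using only the PCI addresses shown in the log (the cvl_0_* netdev names it reports next come from the same sysfs lookup):

  lspci -nn -s 0000:86:00.0                    # expect an Intel Ethernet controller with ID [8086:159b]
  ls /sys/bus/pci/devices/0000:86:00.0/net/    # kernel netdev bound to that port (cvl_0_0 below)
  ls /sys/bus/pci/devices/0000:86:00.1/net/    # likewise for the second port (cvl_0_1 below)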
00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:57.328 Found net devices under 0000:86:00.0: cvl_0_0 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:57.328 Found net devices under 0000:86:00.1: cvl_0_1 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:57.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:11:57.328 00:11:57.328 --- 10.0.0.2 ping statistics --- 00:11:57.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.328 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:11:57.328 00:11:57.328 --- 10.0.0.1 ping statistics --- 00:11:57.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.328 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=513801 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 513801 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 513801 ']' 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.328 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.328 [2024-07-15 11:22:40.020192] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:11:57.328 [2024-07-15 11:22:40.020249] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.328 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.328 [2024-07-15 11:22:40.093159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:57.328 [2024-07-15 11:22:40.172590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:57.328 [2024-07-15 11:22:40.172624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.328 [2024-07-15 11:22:40.172632] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.328 [2024-07-15 11:22:40.172637] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.328 [2024-07-15 11:22:40.172643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.328 [2024-07-15 11:22:40.172695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.328 [2024-07-15 11:22:40.172697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.328 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.329 [2024-07-15 11:22:40.877229] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.329 [2024-07-15 11:22:40.897364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.329 NULL1 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.329 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.329 Delay0 00:11:57.587 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.587 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.587 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.587 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.587 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.587 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=513852 00:11:57.587 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:57.588 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:57.588 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.588 [2024-07-15 11:22:40.988067] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
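Setup for the delete_subsystem case is now complete: nvmf_tgt (pid 513801) exports nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with the delay bdev Delay0 as its namespace, and spdk_nvme_perf (pid 513852) has been launched against it for a 5-second 70/30 random read/write run at queue depth 128. Roughly the same state can be reproduced by hand with SPDK's rpc.py against an already running nvmf_tgt; this is a sketch only, with every value copied from the log above, while the rpc.py path and the default RPC socket are assumptions not shown in the log:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # the step the test actually exercises: delete the subsystem while perf still has I/O outstanding
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The deliberately slow Delay0 bdev keeps commands in flight when the delete lands, so the error lines that follow are the expected outcome rather than a failure: "Read/Write completed with error (sct=0, sc=8)" decodes to the NVMe generic status Command Aborted due to SQ Deletion, and "starting I/O failed: -6" matches -ENXIO once the queue pair has been torn down. The case passes as long as the target survives this and nvmf_delete_subsystem returns cleanly.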
00:11:59.492 11:22:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.492 11:22:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.492 11:22:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 [2024-07-15 11:22:43.181821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bd7a0 is same with the state(5) to be set 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Write 
completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 
00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 starting I/O failed: -6 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 [2024-07-15 11:22:43.186312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2a8000d450 is same with the state(5) to be set 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Write completed 
with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:11:59.750 Write completed with error (sct=0, sc=8) 00:11:59.750 Read completed with error (sct=0, sc=8) 00:12:00.685 [2024-07-15 11:22:44.164021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20beac0 is same with the state(5) to be set 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 [2024-07-15 11:22:44.185030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bd000 is same with the state(5) to be set 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, 
sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 [2024-07-15 11:22:44.185232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bd5c0 is same with the state(5) to be set 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 [2024-07-15 11:22:44.188421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2a8000cfe0 is same with the state(5) to be set 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 Write completed with error (sct=0, sc=8) 00:12:00.685 Read completed with error (sct=0, sc=8) 00:12:00.685 [2024-07-15 11:22:44.188532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2a8000d760 is same with the state(5) to be set 00:12:00.685 Initializing NVMe Controllers 00:12:00.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:00.685 Controller IO queue size 128, less than required. 00:12:00.686 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:12:00.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:00.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:00.686 Initialization complete. Launching workers. 00:12:00.686 ======================================================== 00:12:00.686 Latency(us) 00:12:00.686 Device Information : IOPS MiB/s Average min max 00:12:00.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.91 0.08 898840.90 287.51 1005992.36 00:12:00.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.44 0.08 932679.93 230.62 2001275.32 00:12:00.686 ======================================================== 00:12:00.686 Total : 328.35 0.16 915375.30 230.62 2001275.32 00:12:00.686 00:12:00.686 [2024-07-15 11:22:44.189022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20beac0 (9): Bad file descriptor 00:12:00.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:00.686 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.686 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:00.686 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 513852 00:12:00.686 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 513852 00:12:01.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (513852) - No such process 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 513852 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 513852 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 513852 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.253 11:22:44 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.253 [2024-07-15 11:22:44.714748] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=514537 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 514537 00:12:01.253 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:01.253 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.254 [2024-07-15 11:22:44.790400] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
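The second pass launches a shorter perf run (perf_pid=514537, -t 3) and then simply polls until it exits. From the traced delete_subsystem.sh lines 56-60, the wait loop is a bounded kill -0 poll; the sketch below mirrors it with an illustrative layout, not the verbatim script:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do      # perf still running?
      (( delay++ > 20 )) && exit 1               # give up after ~10 s of 0.5 s naps
      sleep 0.5
  done
  wait "$perf_pid"                               # reap it and pick up its exit status

kill -0 sends no signal; it only checks that the PID still exists, which is why the loop ends with bash printing "No such process" once spdk_nvme_perf has finished.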
00:12:01.820 11:22:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:01.820 11:22:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 514537 00:12:01.820 11:22:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:02.386 11:22:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:02.386 11:22:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 514537 00:12:02.386 11:22:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:02.951 11:22:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:02.951 11:22:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 514537 00:12:02.951 11:22:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:03.208 11:22:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:03.208 11:22:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 514537 00:12:03.208 11:22:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:03.790 11:22:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:03.790 11:22:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 514537 00:12:03.790 11:22:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:04.353 11:22:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:04.353 11:22:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 514537 00:12:04.353 11:22:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:04.611 Initializing NVMe Controllers 00:12:04.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:04.611 Controller IO queue size 128, less than required. 00:12:04.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:04.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:04.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:04.611 Initialization complete. Launching workers. 
00:12:04.611 ======================================================== 00:12:04.611 Latency(us) 00:12:04.611 Device Information : IOPS MiB/s Average min max 00:12:04.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002475.09 1000130.15 1040593.62 00:12:04.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003446.98 1000191.82 1041591.27 00:12:04.611 ======================================================== 00:12:04.611 Total : 256.00 0.12 1002961.03 1000130.15 1041591.27 00:12:04.611 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 514537 00:12:04.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (514537) - No such process 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 514537 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:04.869 rmmod nvme_tcp 00:12:04.869 rmmod nvme_fabrics 00:12:04.869 rmmod nvme_keyring 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 513801 ']' 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 513801 00:12:04.869 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 513801 ']' 00:12:04.870 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 513801 00:12:04.870 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:04.870 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.870 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 513801 00:12:04.870 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:04.870 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:04.870 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 513801' 00:12:04.870 killing process with pid 513801 00:12:04.870 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 513801 00:12:04.870 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 513801 
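Teardown then follows the nvmftestfini path visible in the trace: clear the EXIT trap, unload the host-side NVMe modules, and stop the nvmf_tgt reactor started for the test. A minimal sketch of that sequence (helper logic condensed; the real killprocess in common.sh checks whether the PID is running under sudo rather than matching the process name, so this is not the verbatim function):

  trap - SIGINT SIGTERM EXIT
  modprobe -v -r nvme-tcp        # the trace shows nvme_tcp, nvme_fabrics and nvme_keyring being unloaded here
  modprobe -v -r nvme-fabrics
  # killprocess: confirm the PID is still our reactor process, then stop and reap it
  [ "$(ps --no-headers -o comm= "$nvmfpid")" = reactor_0 ] && kill "$nvmfpid" && wait "$nvmfpid"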
00:12:05.128 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:05.128 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:05.128 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:05.128 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:05.128 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:05.128 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.128 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.128 11:22:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.662 11:22:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:07.662 00:12:07.662 real 0m16.605s 00:12:07.662 user 0m30.710s 00:12:07.662 sys 0m5.348s 00:12:07.662 11:22:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.662 11:22:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.662 ************************************ 00:12:07.662 END TEST nvmf_delete_subsystem 00:12:07.662 ************************************ 00:12:07.662 11:22:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:07.662 11:22:50 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:07.662 11:22:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:07.662 11:22:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.662 11:22:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:07.662 ************************************ 00:12:07.662 START TEST nvmf_ns_masking 00:12:07.662 ************************************ 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:07.662 * Looking for test storage... 
00:12:07.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.662 11:22:50 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=abbf3b50-22ef-453a-80c5-1d5832f8113c 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=26fc2c55-0144-4b59-9d66-908e0a95f707 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4bcbae95-f237-4830-8ce8-23659d7a7727 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:07.663 11:22:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:12.932 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:12.932 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:12.932 
11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:12.932 Found net devices under 0000:86:00.0: cvl_0_0 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:12.932 Found net devices under 0000:86:00.1: cvl_0_1 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:12.932 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:13.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:13.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:12:13.191 00:12:13.191 --- 10.0.0.2 ping statistics --- 00:12:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.191 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:13.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:13.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:12:13.191 00:12:13.191 --- 10.0.0.1 ping statistics --- 00:12:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.191 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=518641 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 518641 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 518641 ']' 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.191 11:22:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.191 11:22:56 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.192 11:22:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.192 11:22:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:13.192 [2024-07-15 11:22:56.659043] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:12:13.192 [2024-07-15 11:22:56.659090] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.192 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.192 [2024-07-15 11:22:56.732355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.451 [2024-07-15 11:22:56.812952] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.451 [2024-07-15 11:22:56.812986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.451 [2024-07-15 11:22:56.812993] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.451 [2024-07-15 11:22:56.812999] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.451 [2024-07-15 11:22:56.813004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.451 [2024-07-15 11:22:56.813027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.017 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.017 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:14.017 11:22:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:14.017 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:14.017 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:14.017 11:22:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.017 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:14.277 [2024-07-15 11:22:57.651361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.277 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:14.277 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:14.277 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:14.277 Malloc1 00:12:14.537 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:14.537 Malloc2 00:12:14.537 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
00:12:14.797 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:15.056 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.056 [2024-07-15 11:22:58.609623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.056 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:15.056 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4bcbae95-f237-4830-8ce8-23659d7a7727 -a 10.0.0.2 -s 4420 -i 4 00:12:15.314 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.314 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:15.314 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.314 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:15.314 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:17.273 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:17.273 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:17.273 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.273 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:17.273 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.273 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:17.273 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:17.273 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:17.531 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:17.531 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:17.531 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:17.531 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:17.531 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:17.531 [ 0]:0x1 00:12:17.531 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:17.531 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:17.531 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f7d0383bff8645a883dcb2277d0f3961 00:12:17.531 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f7d0383bff8645a883dcb2277d0f3961 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:17.531 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
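The visibility checks around this point (ns_is_visible 0x1 / 0x2 in the trace) boil down to two nvme-cli calls per NSID: the namespace must show up in nvme list-ns, and the NGUID reported by nvme id-ns must not be all zeroes. A sketch of that check, with the controller node and NSID as illustrative values taken from the trace:

  ctrl=/dev/nvme0
  nsid=0x1
  nvme list-ns "$ctrl" | grep "$nsid"                                  # NSID listed at all?
  nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
  [[ $nguid != 00000000000000000000000000000000 ]]                     # all-zero NGUID means the namespace is not really visible

After Malloc2 is attached as NSID 2 above, both 0x1 and 0x2 pass this check for the connected host; the later --no-auto-visible re-attach of Malloc1 is why the same check is expected to fail in the NOT ns_is_visible 0x1 block further down.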
00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:17.791 [ 0]:0x1 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f7d0383bff8645a883dcb2277d0f3961 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f7d0383bff8645a883dcb2277d0f3961 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:17.791 [ 1]:0x2 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f682a171f45d486db4130938d412cf9c 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f682a171f45d486db4130938d412cf9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.791 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.050 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:18.309 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:18.309 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4bcbae95-f237-4830-8ce8-23659d7a7727 -a 10.0.0.2 -s 4420 -i 4 00:12:18.309 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:18.309 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:18.309 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.309 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:18.309 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:18.309 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:20.843 11:23:03 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:20.843 [ 0]:0x2 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f682a171f45d486db4130938d412cf9c 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
f682a171f45d486db4130938d412cf9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:20.843 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:20.843 [ 0]:0x1 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f7d0383bff8645a883dcb2277d0f3961 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f7d0383bff8645a883dcb2277d0f3961 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:20.843 [ 1]:0x2 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f682a171f45d486db4130938d412cf9c 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f682a171f45d486db4130938d412cf9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:20.843 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:21.101 [ 0]:0x2 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:21.101 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f682a171f45d486db4130938d412cf9c 00:12:21.102 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f682a171f45d486db4130938d412cf9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:21.102 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:21.102 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.102 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:21.361 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:21.361 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4bcbae95-f237-4830-8ce8-23659d7a7727 -a 10.0.0.2 -s 4420 -i 4 00:12:21.361 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:21.361 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:21.361 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.361 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:21.361 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:21.361 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:23.897 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:23.898 [ 0]:0x1 00:12:23.898 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:23.898 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.898 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f7d0383bff8645a883dcb2277d0f3961 00:12:23.898 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f7d0383bff8645a883dcb2277d0f3961 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.898 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:23.898 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:23.898 11:23:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:23.898 [ 1]:0x2 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f682a171f45d486db4130938d412cf9c 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f682a171f45d486db4130938d412cf9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:23.898 [ 0]:0x2 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f682a171f45d486db4130938d412cf9c 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f682a171f45d486db4130938d412cf9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:23.898 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:24.158 [2024-07-15 11:23:07.502846] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:24.158 request: 00:12:24.158 { 00:12:24.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.158 "nsid": 2, 00:12:24.158 "host": "nqn.2016-06.io.spdk:host1", 00:12:24.158 "method": "nvmf_ns_remove_host", 00:12:24.158 "req_id": 1 00:12:24.158 } 00:12:24.158 Got JSON-RPC error response 00:12:24.158 response: 00:12:24.158 { 00:12:24.158 "code": -32602, 00:12:24.158 "message": "Invalid parameters" 00:12:24.158 } 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:24.158 [ 0]:0x2 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f682a171f45d486db4130938d412cf9c 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
f682a171f45d486db4130938d412cf9c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=520572 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 520572 /var/tmp/host.sock 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 520572 ']' 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:24.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.158 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:24.418 [2024-07-15 11:23:07.771727] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:12:24.418 [2024-07-15 11:23:07.771772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520572 ] 00:12:24.418 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.418 [2024-07-15 11:23:07.838466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.418 [2024-07-15 11:23:07.911611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.987 11:23:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.246 11:23:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:25.246 11:23:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.246 11:23:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:25.506 11:23:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid abbf3b50-22ef-453a-80c5-1d5832f8113c 00:12:25.506 11:23:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:25.506 11:23:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g ABBF3B5022EF453A80C51D5832F8113C -i 00:12:25.765 11:23:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 26fc2c55-0144-4b59-9d66-908e0a95f707 00:12:25.765 11:23:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:25.765 11:23:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 26FC2C5501444B599D66908E0A95F707 -i 00:12:25.765 11:23:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:26.024 11:23:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:26.284 11:23:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:26.284 11:23:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:26.543 nvme0n1 00:12:26.543 11:23:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:26.543 11:23:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:26.801 nvme1n2 00:12:26.801 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:26.801 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:26.801 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:26.801 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:26.801 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:27.060 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:27.060 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:27.060 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:27.060 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:27.320 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ abbf3b50-22ef-453a-80c5-1d5832f8113c == \a\b\b\f\3\b\5\0\-\2\2\e\f\-\4\5\3\a\-\8\0\c\5\-\1\d\5\8\3\2\f\8\1\1\3\c ]] 00:12:27.320 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:27.320 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:27.320 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:27.579 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 26fc2c55-0144-4b59-9d66-908e0a95f707 == \2\6\f\c\2\c\5\5\-\0\1\4\4\-\4\b\5\9\-\9\d\6\6\-\9\0\8\e\0\a\9\5\f\7\0\7 ]] 00:12:27.579 11:23:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 520572 00:12:27.579 11:23:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 520572 ']' 00:12:27.579 11:23:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 520572 00:12:27.579 11:23:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:27.579 11:23:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:27.579 11:23:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 520572 00:12:27.579 11:23:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:27.579 11:23:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:27.579 11:23:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 520572' 00:12:27.579 killing process with pid 520572 00:12:27.580 11:23:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 520572 00:12:27.580 11:23:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 520572 00:12:27.839 11:23:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:28.098 11:23:11 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:28.098 rmmod nvme_tcp 00:12:28.098 rmmod nvme_fabrics 00:12:28.098 rmmod nvme_keyring 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 518641 ']' 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 518641 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 518641 ']' 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 518641 00:12:28.098 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:28.099 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:28.099 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 518641 00:12:28.099 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:28.099 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:28.099 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 518641' 00:12:28.099 killing process with pid 518641 00:12:28.099 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 518641 00:12:28.099 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 518641 00:12:28.358 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:28.358 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:28.359 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:28.359 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.359 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:28.359 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.359 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.359 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.265 11:23:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.525 00:12:30.525 real 0m23.153s 00:12:30.525 user 0m24.963s 00:12:30.525 sys 0m6.468s 00:12:30.525 11:23:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.525 11:23:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:30.525 ************************************ 00:12:30.525 END TEST nvmf_ns_masking 00:12:30.525 ************************************ 00:12:30.525 11:23:13 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:30.525 11:23:13 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:30.525 11:23:13 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:30.525 11:23:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:30.525 11:23:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.525 11:23:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:30.525 ************************************ 00:12:30.525 START TEST nvmf_nvme_cli 00:12:30.525 ************************************ 00:12:30.525 11:23:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:30.525 * Looking for test storage... 00:12:30.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:30.525 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.526 11:23:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:37.151 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:37.151 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:37.151 Found net devices under 0000:86:00.0: cvl_0_0 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:37.151 Found net devices under 0000:86:00.1: cvl_0_1 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.151 11:23:19 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:37.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:12:37.151 00:12:37.151 --- 10.0.0.2 ping statistics --- 00:12:37.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.151 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:37.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:12:37.151 00:12:37.151 --- 10.0.0.1 ping statistics --- 00:12:37.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.151 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:37.151 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=524791 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 524791 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 524791 ']' 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.152 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.152 [2024-07-15 11:23:19.898504] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:12:37.152 [2024-07-15 11:23:19.898544] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.152 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.152 [2024-07-15 11:23:19.971765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.152 [2024-07-15 11:23:20.058798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.152 [2024-07-15 11:23:20.058835] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.152 [2024-07-15 11:23:20.058842] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.152 [2024-07-15 11:23:20.058848] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.152 [2024-07-15 11:23:20.058853] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.152 [2024-07-15 11:23:20.058901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.152 [2024-07-15 11:23:20.058933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.152 [2024-07-15 11:23:20.059038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.152 [2024-07-15 11:23:20.059038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.152 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.152 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:37.152 11:23:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:37.152 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:37.152 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.411 [2024-07-15 11:23:20.750024] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.411 Malloc0 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.411 Malloc1 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.411 11:23:20 
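Spelled out against scripts/rpc.py (rpc_cmd in the trace is essentially a wrapper around it), the transport and bdev setup above is:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192   # -o / -u are the TCP options nvmf/common.sh adds; see rpc.py --help
  $rpc bdev_malloc_create 64 512 -b Malloc0      # two 64 MiB RAM-backed bdevs with 512 B blocks
  $rpc bdev_malloc_create 64 512 -b Malloc1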
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.411 [2024-07-15 11:23:20.827810] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:37.411 00:12:37.411 Discovery Log Number of Records 2, Generation counter 2 00:12:37.411 =====Discovery Log Entry 0====== 00:12:37.411 trtype: tcp 00:12:37.411 adrfam: ipv4 00:12:37.411 subtype: current discovery subsystem 00:12:37.411 treq: not required 00:12:37.411 portid: 0 00:12:37.411 trsvcid: 4420 00:12:37.411 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:37.411 traddr: 10.0.0.2 00:12:37.411 eflags: explicit discovery connections, duplicate discovery information 00:12:37.411 sectype: none 00:12:37.411 =====Discovery Log Entry 1====== 00:12:37.411 trtype: tcp 00:12:37.411 adrfam: ipv4 00:12:37.411 subtype: nvme subsystem 00:12:37.411 treq: not required 00:12:37.411 portid: 0 00:12:37.411 trsvcid: 4420 00:12:37.411 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:37.411 traddr: 10.0.0.2 00:12:37.411 eflags: none 00:12:37.411 sectype: none 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- 
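Condensed, the nvme_cli.sh target setup plus the initiator-side discovery check above is the following (with $rpc as in the previous sketch):

  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns $nqn Malloc0
  $rpc nvmf_subsystem_add_ns $nqn Malloc1
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # the discovery log should then report two records: the discovery subsystem and cnode1
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
                --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420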
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:37.411 11:23:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.789 11:23:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:38.789 11:23:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:38.789 11:23:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.789 11:23:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:38.789 11:23:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:38.789 11:23:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:40.701 11:23:24 
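waitforserial above is just a polling loop over lsblk; a trimmed sketch of the connect-and-wait step (the 15-iteration / 2-second cadence matches the trace, the rest of the helper is omitted):

  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
               --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # wait for both namespaces (Malloc0 + Malloc1) to surface as block devices with the subsystem serial
  for ((i = 0; i <= 15; i++)); do
      (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 2 )) && break
      sleep 2
  done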
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:40.701 /dev/nvme0n1 ]] 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:40.701 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:40.960 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:40.960 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:40.960 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:40.960 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:40.960 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:40.960 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:40.960 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:40.960 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:40.960 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:40.960 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:40.960 11:23:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:40.960 11:23:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- 
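The inverse check, as waitforserial_disconnect performs below: drop the controller and wait until no block device with that serial remains (the sleep interval here is illustrative, not the helper's exact value):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      sleep 1
  done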
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:41.219 rmmod nvme_tcp 00:12:41.219 rmmod nvme_fabrics 00:12:41.219 rmmod nvme_keyring 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 524791 ']' 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 524791 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 524791 ']' 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 524791 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 524791 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 524791' 00:12:41.219 killing process with pid 524791 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 524791 00:12:41.219 11:23:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 524791 00:12:41.479 11:23:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:41.479 11:23:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:41.479 11:23:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:41.479 11:23:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.479 11:23:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:41.479 11:23:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.479 11:23:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.479 11:23:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.017 11:23:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:44.017 00:12:44.017 real 0m13.159s 00:12:44.017 user 0m21.412s 00:12:44.017 sys 0m4.965s 00:12:44.017 11:23:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:44.017 11:23:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:44.017 ************************************ 00:12:44.017 END TEST nvmf_nvme_cli 00:12:44.017 ************************************ 00:12:44.017 11:23:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:44.017 11:23:27 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:44.017 11:23:27 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:12:44.017 11:23:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:44.017 11:23:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.017 11:23:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:44.017 ************************************ 00:12:44.017 START TEST nvmf_vfio_user 00:12:44.017 ************************************ 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:44.017 * Looking for test storage... 00:12:44.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.017 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:44.018 
11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=526084 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 526084' 00:12:44.018 Process pid: 526084 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 526084 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 526084 ']' 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.018 11:23:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:44.018 [2024-07-15 11:23:27.337684] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:12:44.018 [2024-07-15 11:23:27.337730] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.018 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.018 [2024-07-15 11:23:27.404986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.018 [2024-07-15 11:23:27.489375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.018 [2024-07-15 11:23:27.489406] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.018 [2024-07-15 11:23:27.489413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.018 [2024-07-15 11:23:27.489419] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.018 [2024-07-15 11:23:27.489424] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
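Same pattern as the TCP run, except the target now gets an explicit core list and no network namespace. A minimal sketch of the nvmfpid startup shown above, again substituting a simple RPC poll for the fuller waitforlisten helper:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
      sleep 0.1
  done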
00:12:44.018 [2024-07-15 11:23:27.489475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.018 [2024-07-15 11:23:27.489581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.018 [2024-07-15 11:23:27.489686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.018 [2024-07-15 11:23:27.489687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.585 11:23:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.585 11:23:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:44.585 11:23:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:45.964 11:23:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:45.964 11:23:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:45.964 11:23:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:45.964 11:23:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:45.964 11:23:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:45.964 11:23:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:45.964 Malloc1 00:12:45.964 11:23:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:46.222 11:23:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:46.480 11:23:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:46.739 11:23:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:46.739 11:23:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:46.739 11:23:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:46.739 Malloc2 00:12:46.739 11:23:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:46.997 11:23:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:47.254 11:23:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:47.514 11:23:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:47.514 11:23:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:47.514 11:23:30 
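setup_nvmf_vfio_user above repeats the same recipe per device, only with the VFIOUSER transport and a filesystem path instead of an IP listener. A per-device sketch (with $rpc as before), followed by the identify invocation the trace runs against the first endpoint:

  $rpc nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

  # exercise one endpoint from the initiator side with the in-tree identify example,
  # as the run_nvmf_vfio_user trace below does
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci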
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:47.514 11:23:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:47.514 11:23:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:47.514 11:23:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:47.514 [2024-07-15 11:23:30.868052] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:12:47.514 [2024-07-15 11:23:30.868088] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526788 ] 00:12:47.514 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.514 [2024-07-15 11:23:30.896758] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:47.514 [2024-07-15 11:23:30.906602] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:47.514 [2024-07-15 11:23:30.906625] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa3167f9000 00:12:47.514 [2024-07-15 11:23:30.907600] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.514 [2024-07-15 11:23:30.908600] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.514 [2024-07-15 11:23:30.909610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.514 [2024-07-15 11:23:30.910617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:47.514 [2024-07-15 11:23:30.911627] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:47.514 [2024-07-15 11:23:30.912626] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.514 [2024-07-15 11:23:30.913633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:47.514 [2024-07-15 11:23:30.914633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.514 [2024-07-15 11:23:30.915646] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:47.514 [2024-07-15 11:23:30.915655] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa3167ee000 00:12:47.514 [2024-07-15 11:23:30.916598] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:47.514 [2024-07-15 11:23:30.925203] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:47.514 [2024-07-15 11:23:30.925229] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:47.514 [2024-07-15 11:23:30.930742] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:47.514 [2024-07-15 11:23:30.930779] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:47.514 [2024-07-15 11:23:30.930848] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:47.515 [2024-07-15 11:23:30.930863] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:47.515 [2024-07-15 11:23:30.930868] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:47.515 [2024-07-15 11:23:30.931741] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:47.515 [2024-07-15 11:23:30.931749] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:47.515 [2024-07-15 11:23:30.931755] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:47.515 [2024-07-15 11:23:30.932748] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:47.515 [2024-07-15 11:23:30.932755] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:47.515 [2024-07-15 11:23:30.932761] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:47.515 [2024-07-15 11:23:30.933751] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:47.515 [2024-07-15 11:23:30.933759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:47.515 [2024-07-15 11:23:30.934756] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:47.515 [2024-07-15 11:23:30.934763] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:47.515 [2024-07-15 11:23:30.934767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:47.515 [2024-07-15 11:23:30.934773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:47.515 [2024-07-15 11:23:30.934880] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:47.515 [2024-07-15 11:23:30.934885] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:47.515 [2024-07-15 11:23:30.934889] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:47.515 [2024-07-15 11:23:30.935763] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:47.515 [2024-07-15 11:23:30.936771] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:47.515 [2024-07-15 11:23:30.937770] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:47.515 [2024-07-15 11:23:30.938768] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:47.515 [2024-07-15 11:23:30.938831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:47.515 [2024-07-15 11:23:30.939783] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:47.515 [2024-07-15 11:23:30.939790] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:47.515 [2024-07-15 11:23:30.939794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.939811] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:47.515 [2024-07-15 11:23:30.939821] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.939835] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:47.515 [2024-07-15 11:23:30.939840] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.515 [2024-07-15 11:23:30.939852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.515 [2024-07-15 11:23:30.939894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:47.515 [2024-07-15 11:23:30.939903] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:47.515 [2024-07-15 11:23:30.939912] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:47.515 [2024-07-15 11:23:30.939916] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:47.515 [2024-07-15 11:23:30.939919] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:47.515 [2024-07-15 11:23:30.939924] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:47.515 [2024-07-15 11:23:30.939927] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:47.515 [2024-07-15 11:23:30.939931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.939938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.939947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:47.515 [2024-07-15 11:23:30.939960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:47.515 [2024-07-15 11:23:30.939971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.515 [2024-07-15 11:23:30.939979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.515 [2024-07-15 11:23:30.939986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.515 [2024-07-15 11:23:30.939994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.515 [2024-07-15 11:23:30.939998] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940005] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:47.515 [2024-07-15 11:23:30.940024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:47.515 [2024-07-15 11:23:30.940029] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:47.515 [2024-07-15 11:23:30.940033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940044] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940052] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:47.515 [2024-07-15 11:23:30.940063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:47.515 [2024-07-15 11:23:30.940112] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940119] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940125] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:47.515 [2024-07-15 11:23:30.940129] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:47.515 [2024-07-15 11:23:30.940135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:47.515 [2024-07-15 11:23:30.940148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:47.515 [2024-07-15 11:23:30.940157] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:47.515 [2024-07-15 11:23:30.940164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940177] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:47.515 [2024-07-15 11:23:30.940248] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.515 [2024-07-15 11:23:30.940254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.515 [2024-07-15 11:23:30.940271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:47.515 [2024-07-15 11:23:30.940283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940296] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:47.515 [2024-07-15 11:23:30.940299] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.515 [2024-07-15 11:23:30.940305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.515 [2024-07-15 11:23:30.940315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:47.515 [2024-07-15 11:23:30.940322] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:12:47.515 [2024-07-15 11:23:30.940335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940352] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:47.515 [2024-07-15 11:23:30.940356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:47.515 [2024-07-15 11:23:30.940360] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:47.515 [2024-07-15 11:23:30.940377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:47.515 [2024-07-15 11:23:30.940386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:47.516 [2024-07-15 11:23:30.940396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:47.516 [2024-07-15 11:23:30.940406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:47.516 [2024-07-15 11:23:30.940415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:47.516 [2024-07-15 11:23:30.940425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:47.516 [2024-07-15 11:23:30.940434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:47.516 [2024-07-15 11:23:30.940445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:47.516 [2024-07-15 11:23:30.940456] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:47.516 [2024-07-15 11:23:30.940460] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:47.516 [2024-07-15 11:23:30.940463] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:47.516 [2024-07-15 11:23:30.940466] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:47.516 [2024-07-15 11:23:30.940471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:47.516 [2024-07-15 11:23:30.940478] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:47.516 
[2024-07-15 11:23:30.940482] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:47.516 [2024-07-15 11:23:30.940487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:47.516 [2024-07-15 11:23:30.940493] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:47.516 [2024-07-15 11:23:30.940497] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.516 [2024-07-15 11:23:30.940502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.516 [2024-07-15 11:23:30.940508] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:47.516 [2024-07-15 11:23:30.940512] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:47.516 [2024-07-15 11:23:30.940517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:47.516 [2024-07-15 11:23:30.940523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:47.516 [2024-07-15 11:23:30.940534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:47.516 [2024-07-15 11:23:30.940543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:47.516 [2024-07-15 11:23:30.940549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:47.516 ===================================================== 00:12:47.516 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:47.516 ===================================================== 00:12:47.516 Controller Capabilities/Features 00:12:47.516 ================================ 00:12:47.516 Vendor ID: 4e58 00:12:47.516 Subsystem Vendor ID: 4e58 00:12:47.516 Serial Number: SPDK1 00:12:47.516 Model Number: SPDK bdev Controller 00:12:47.516 Firmware Version: 24.09 00:12:47.516 Recommended Arb Burst: 6 00:12:47.516 IEEE OUI Identifier: 8d 6b 50 00:12:47.516 Multi-path I/O 00:12:47.516 May have multiple subsystem ports: Yes 00:12:47.516 May have multiple controllers: Yes 00:12:47.516 Associated with SR-IOV VF: No 00:12:47.516 Max Data Transfer Size: 131072 00:12:47.516 Max Number of Namespaces: 32 00:12:47.516 Max Number of I/O Queues: 127 00:12:47.516 NVMe Specification Version (VS): 1.3 00:12:47.516 NVMe Specification Version (Identify): 1.3 00:12:47.516 Maximum Queue Entries: 256 00:12:47.516 Contiguous Queues Required: Yes 00:12:47.516 Arbitration Mechanisms Supported 00:12:47.516 Weighted Round Robin: Not Supported 00:12:47.516 Vendor Specific: Not Supported 00:12:47.516 Reset Timeout: 15000 ms 00:12:47.516 Doorbell Stride: 4 bytes 00:12:47.516 NVM Subsystem Reset: Not Supported 00:12:47.516 Command Sets Supported 00:12:47.516 NVM Command Set: Supported 00:12:47.516 Boot Partition: Not Supported 00:12:47.516 Memory Page Size Minimum: 4096 bytes 00:12:47.516 Memory Page Size Maximum: 4096 bytes 00:12:47.516 Persistent Memory Region: Not Supported 
00:12:47.516 Optional Asynchronous Events Supported 00:12:47.516 Namespace Attribute Notices: Supported 00:12:47.516 Firmware Activation Notices: Not Supported 00:12:47.516 ANA Change Notices: Not Supported 00:12:47.516 PLE Aggregate Log Change Notices: Not Supported 00:12:47.516 LBA Status Info Alert Notices: Not Supported 00:12:47.516 EGE Aggregate Log Change Notices: Not Supported 00:12:47.516 Normal NVM Subsystem Shutdown event: Not Supported 00:12:47.516 Zone Descriptor Change Notices: Not Supported 00:12:47.516 Discovery Log Change Notices: Not Supported 00:12:47.516 Controller Attributes 00:12:47.516 128-bit Host Identifier: Supported 00:12:47.516 Non-Operational Permissive Mode: Not Supported 00:12:47.516 NVM Sets: Not Supported 00:12:47.516 Read Recovery Levels: Not Supported 00:12:47.516 Endurance Groups: Not Supported 00:12:47.516 Predictable Latency Mode: Not Supported 00:12:47.516 Traffic Based Keep ALive: Not Supported 00:12:47.516 Namespace Granularity: Not Supported 00:12:47.516 SQ Associations: Not Supported 00:12:47.516 UUID List: Not Supported 00:12:47.516 Multi-Domain Subsystem: Not Supported 00:12:47.516 Fixed Capacity Management: Not Supported 00:12:47.516 Variable Capacity Management: Not Supported 00:12:47.516 Delete Endurance Group: Not Supported 00:12:47.516 Delete NVM Set: Not Supported 00:12:47.516 Extended LBA Formats Supported: Not Supported 00:12:47.516 Flexible Data Placement Supported: Not Supported 00:12:47.516 00:12:47.516 Controller Memory Buffer Support 00:12:47.516 ================================ 00:12:47.516 Supported: No 00:12:47.516 00:12:47.516 Persistent Memory Region Support 00:12:47.516 ================================ 00:12:47.516 Supported: No 00:12:47.516 00:12:47.516 Admin Command Set Attributes 00:12:47.516 ============================ 00:12:47.516 Security Send/Receive: Not Supported 00:12:47.516 Format NVM: Not Supported 00:12:47.516 Firmware Activate/Download: Not Supported 00:12:47.516 Namespace Management: Not Supported 00:12:47.516 Device Self-Test: Not Supported 00:12:47.516 Directives: Not Supported 00:12:47.516 NVMe-MI: Not Supported 00:12:47.516 Virtualization Management: Not Supported 00:12:47.516 Doorbell Buffer Config: Not Supported 00:12:47.516 Get LBA Status Capability: Not Supported 00:12:47.516 Command & Feature Lockdown Capability: Not Supported 00:12:47.516 Abort Command Limit: 4 00:12:47.516 Async Event Request Limit: 4 00:12:47.516 Number of Firmware Slots: N/A 00:12:47.516 Firmware Slot 1 Read-Only: N/A 00:12:47.516 Firmware Activation Without Reset: N/A 00:12:47.516 Multiple Update Detection Support: N/A 00:12:47.516 Firmware Update Granularity: No Information Provided 00:12:47.516 Per-Namespace SMART Log: No 00:12:47.516 Asymmetric Namespace Access Log Page: Not Supported 00:12:47.516 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:47.516 Command Effects Log Page: Supported 00:12:47.516 Get Log Page Extended Data: Supported 00:12:47.516 Telemetry Log Pages: Not Supported 00:12:47.516 Persistent Event Log Pages: Not Supported 00:12:47.516 Supported Log Pages Log Page: May Support 00:12:47.516 Commands Supported & Effects Log Page: Not Supported 00:12:47.516 Feature Identifiers & Effects Log Page:May Support 00:12:47.516 NVMe-MI Commands & Effects Log Page: May Support 00:12:47.516 Data Area 4 for Telemetry Log: Not Supported 00:12:47.516 Error Log Page Entries Supported: 128 00:12:47.516 Keep Alive: Supported 00:12:47.516 Keep Alive Granularity: 10000 ms 00:12:47.516 00:12:47.516 NVM Command Set Attributes 
00:12:47.516 ========================== 00:12:47.516 Submission Queue Entry Size 00:12:47.516 Max: 64 00:12:47.516 Min: 64 00:12:47.516 Completion Queue Entry Size 00:12:47.516 Max: 16 00:12:47.516 Min: 16 00:12:47.516 Number of Namespaces: 32 00:12:47.516 Compare Command: Supported 00:12:47.516 Write Uncorrectable Command: Not Supported 00:12:47.516 Dataset Management Command: Supported 00:12:47.516 Write Zeroes Command: Supported 00:12:47.516 Set Features Save Field: Not Supported 00:12:47.516 Reservations: Not Supported 00:12:47.516 Timestamp: Not Supported 00:12:47.516 Copy: Supported 00:12:47.516 Volatile Write Cache: Present 00:12:47.516 Atomic Write Unit (Normal): 1 00:12:47.516 Atomic Write Unit (PFail): 1 00:12:47.516 Atomic Compare & Write Unit: 1 00:12:47.516 Fused Compare & Write: Supported 00:12:47.516 Scatter-Gather List 00:12:47.516 SGL Command Set: Supported (Dword aligned) 00:12:47.516 SGL Keyed: Not Supported 00:12:47.516 SGL Bit Bucket Descriptor: Not Supported 00:12:47.516 SGL Metadata Pointer: Not Supported 00:12:47.516 Oversized SGL: Not Supported 00:12:47.516 SGL Metadata Address: Not Supported 00:12:47.516 SGL Offset: Not Supported 00:12:47.516 Transport SGL Data Block: Not Supported 00:12:47.516 Replay Protected Memory Block: Not Supported 00:12:47.516 00:12:47.516 Firmware Slot Information 00:12:47.516 ========================= 00:12:47.516 Active slot: 1 00:12:47.516 Slot 1 Firmware Revision: 24.09 00:12:47.516 00:12:47.516 00:12:47.516 Commands Supported and Effects 00:12:47.516 ============================== 00:12:47.516 Admin Commands 00:12:47.516 -------------- 00:12:47.516 Get Log Page (02h): Supported 00:12:47.516 Identify (06h): Supported 00:12:47.516 Abort (08h): Supported 00:12:47.516 Set Features (09h): Supported 00:12:47.517 Get Features (0Ah): Supported 00:12:47.517 Asynchronous Event Request (0Ch): Supported 00:12:47.517 Keep Alive (18h): Supported 00:12:47.517 I/O Commands 00:12:47.517 ------------ 00:12:47.517 Flush (00h): Supported LBA-Change 00:12:47.517 Write (01h): Supported LBA-Change 00:12:47.517 Read (02h): Supported 00:12:47.517 Compare (05h): Supported 00:12:47.517 Write Zeroes (08h): Supported LBA-Change 00:12:47.517 Dataset Management (09h): Supported LBA-Change 00:12:47.517 Copy (19h): Supported LBA-Change 00:12:47.517 00:12:47.517 Error Log 00:12:47.517 ========= 00:12:47.517 00:12:47.517 Arbitration 00:12:47.517 =========== 00:12:47.517 Arbitration Burst: 1 00:12:47.517 00:12:47.517 Power Management 00:12:47.517 ================ 00:12:47.517 Number of Power States: 1 00:12:47.517 Current Power State: Power State #0 00:12:47.517 Power State #0: 00:12:47.517 Max Power: 0.00 W 00:12:47.517 Non-Operational State: Operational 00:12:47.517 Entry Latency: Not Reported 00:12:47.517 Exit Latency: Not Reported 00:12:47.517 Relative Read Throughput: 0 00:12:47.517 Relative Read Latency: 0 00:12:47.517 Relative Write Throughput: 0 00:12:47.517 Relative Write Latency: 0 00:12:47.517 Idle Power: Not Reported 00:12:47.517 Active Power: Not Reported 00:12:47.517 Non-Operational Permissive Mode: Not Supported 00:12:47.517 00:12:47.517 Health Information 00:12:47.517 ================== 00:12:47.517 Critical Warnings: 00:12:47.517 Available Spare Space: OK 00:12:47.517 Temperature: OK 00:12:47.517 Device Reliability: OK 00:12:47.517 Read Only: No 00:12:47.517 Volatile Memory Backup: OK 00:12:47.517 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:47.517 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:47.517 Available Spare: 0% 00:12:47.517 
Available Sp[2024-07-15 11:23:30.940636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:47.517 [2024-07-15 11:23:30.940650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:47.517 [2024-07-15 11:23:30.940675] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:47.517 [2024-07-15 11:23:30.940683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.517 [2024-07-15 11:23:30.940689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.517 [2024-07-15 11:23:30.940694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.517 [2024-07-15 11:23:30.940699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.517 [2024-07-15 11:23:30.944231] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:47.517 [2024-07-15 11:23:30.944242] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:47.517 [2024-07-15 11:23:30.944804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:47.517 [2024-07-15 11:23:30.944852] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:47.517 [2024-07-15 11:23:30.944858] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:47.517 [2024-07-15 11:23:30.945808] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:47.517 [2024-07-15 11:23:30.945818] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:47.517 [2024-07-15 11:23:30.945867] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:47.517 [2024-07-15 11:23:30.947838] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:47.517 are Threshold: 0% 00:12:47.517 Life Percentage Used: 0% 00:12:47.517 Data Units Read: 0 00:12:47.517 Data Units Written: 0 00:12:47.517 Host Read Commands: 0 00:12:47.517 Host Write Commands: 0 00:12:47.517 Controller Busy Time: 0 minutes 00:12:47.517 Power Cycles: 0 00:12:47.517 Power On Hours: 0 hours 00:12:47.517 Unsafe Shutdowns: 0 00:12:47.517 Unrecoverable Media Errors: 0 00:12:47.517 Lifetime Error Log Entries: 0 00:12:47.517 Warning Temperature Time: 0 minutes 00:12:47.517 Critical Temperature Time: 0 minutes 00:12:47.517 00:12:47.517 Number of Queues 00:12:47.517 ================ 00:12:47.517 Number of I/O Submission Queues: 127 00:12:47.517 Number of I/O Completion Queues: 127 00:12:47.517 00:12:47.517 Active Namespaces 00:12:47.517 ================= 00:12:47.517 Namespace ID:1 00:12:47.517 Error Recovery Timeout: Unlimited 00:12:47.517 Command 
Set Identifier: NVM (00h) 00:12:47.517 Deallocate: Supported 00:12:47.517 Deallocated/Unwritten Error: Not Supported 00:12:47.517 Deallocated Read Value: Unknown 00:12:47.517 Deallocate in Write Zeroes: Not Supported 00:12:47.517 Deallocated Guard Field: 0xFFFF 00:12:47.517 Flush: Supported 00:12:47.517 Reservation: Supported 00:12:47.517 Namespace Sharing Capabilities: Multiple Controllers 00:12:47.517 Size (in LBAs): 131072 (0GiB) 00:12:47.517 Capacity (in LBAs): 131072 (0GiB) 00:12:47.517 Utilization (in LBAs): 131072 (0GiB) 00:12:47.517 NGUID: 9C6B8CA9DD3E437B8C867F8ADA036AA7 00:12:47.517 UUID: 9c6b8ca9-dd3e-437b-8c86-7f8ada036aa7 00:12:47.517 Thin Provisioning: Not Supported 00:12:47.517 Per-NS Atomic Units: Yes 00:12:47.517 Atomic Boundary Size (Normal): 0 00:12:47.517 Atomic Boundary Size (PFail): 0 00:12:47.517 Atomic Boundary Offset: 0 00:12:47.517 Maximum Single Source Range Length: 65535 00:12:47.517 Maximum Copy Length: 65535 00:12:47.517 Maximum Source Range Count: 1 00:12:47.517 NGUID/EUI64 Never Reused: No 00:12:47.517 Namespace Write Protected: No 00:12:47.517 Number of LBA Formats: 1 00:12:47.517 Current LBA Format: LBA Format #00 00:12:47.517 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:47.517 00:12:47.517 11:23:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:47.517 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.776 [2024-07-15 11:23:31.163987] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.043 Initializing NVMe Controllers 00:12:53.043 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:53.043 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:53.043 Initialization complete. Launching workers. 00:12:53.043 ======================================================== 00:12:53.043 Latency(us) 00:12:53.043 Device Information : IOPS MiB/s Average min max 00:12:53.043 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39862.72 155.71 3210.83 955.36 10611.13 00:12:53.043 ======================================================== 00:12:53.043 Total : 39862.72 155.71 3210.83 955.36 10611.13 00:12:53.043 00:12:53.043 [2024-07-15 11:23:36.183920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.043 11:23:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:53.043 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.043 [2024-07-15 11:23:36.395919] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:58.322 Initializing NVMe Controllers 00:12:58.322 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:58.322 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:58.322 Initialization complete. Launching workers. 
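
The read run above and the write run whose results follow differ only in the workload flag (-w read vs. -w write); the shared parameters (-q 128 outstanding I/Os, -o 4096-byte blocks, -t 5 seconds, -c 0x2 pinning the worker to core 1) are what make the two latency tables directly comparable. A minimal sketch of the same invocation, assuming an in-tree SPDK build and a vfio-user target already listening on the socket path used throughout this run:

    # read workload; substitute -w write to reproduce the second run
    ./build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
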
00:12:58.322 ======================================================== 00:12:58.322 Latency(us) 00:12:58.322 Device Information : IOPS MiB/s Average min max 00:12:58.322 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.28 62.65 7979.72 6979.14 8988.02 00:12:58.322 ======================================================== 00:12:58.322 Total : 16039.28 62.65 7979.72 6979.14 8988.02 00:12:58.322 00:12:58.322 [2024-07-15 11:23:41.432033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:58.322 11:23:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:58.322 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.322 [2024-07-15 11:23:41.627982] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:03.634 [2024-07-15 11:23:46.695481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:03.634 Initializing NVMe Controllers 00:13:03.634 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:03.634 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:03.634 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:03.634 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:03.634 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:03.634 Initialization complete. Launching workers. 00:13:03.634 Starting thread on core 2 00:13:03.634 Starting thread on core 3 00:13:03.634 Starting thread on core 1 00:13:03.634 11:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:03.634 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.634 [2024-07-15 11:23:46.976594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:06.922 [2024-07-15 11:23:50.037337] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:06.922 Initializing NVMe Controllers 00:13:06.923 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:06.923 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:06.923 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:06.923 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:06.923 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:06.923 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:06.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:06.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:06.923 Initialization complete. Launching workers. 
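
Before launching its workers, the arbitration example echoes the full configuration it resolved to (the -q 64 -s 131072 -w randrw -M 50 ... line above); the per-core IO/s figures that follow come from one worker thread per core in the 0xf mask. A sketch of the recorded invocation, assuming the in-tree examples have been built:

    # 3-second run (-t 3) against the same vfio-user controller; -d 256 and -g as recorded in the log above
    ./build/examples/arbitration -t 3 \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -d 256 -g
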
00:13:06.923 Starting thread on core 1 with urgent priority queue 00:13:06.923 Starting thread on core 2 with urgent priority queue 00:13:06.923 Starting thread on core 3 with urgent priority queue 00:13:06.923 Starting thread on core 0 with urgent priority queue 00:13:06.923 SPDK bdev Controller (SPDK1 ) core 0: 9589.67 IO/s 10.43 secs/100000 ios 00:13:06.923 SPDK bdev Controller (SPDK1 ) core 1: 9587.00 IO/s 10.43 secs/100000 ios 00:13:06.923 SPDK bdev Controller (SPDK1 ) core 2: 7344.00 IO/s 13.62 secs/100000 ios 00:13:06.923 SPDK bdev Controller (SPDK1 ) core 3: 8372.00 IO/s 11.94 secs/100000 ios 00:13:06.923 ======================================================== 00:13:06.923 00:13:06.923 11:23:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:06.923 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.923 [2024-07-15 11:23:50.310770] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:06.923 Initializing NVMe Controllers 00:13:06.923 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:06.923 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:06.923 Namespace ID: 1 size: 0GB 00:13:06.923 Initialization complete. 00:13:06.923 INFO: using host memory buffer for IO 00:13:06.923 Hello world! 00:13:06.923 [2024-07-15 11:23:50.344978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:06.923 11:23:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:06.923 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.180 [2024-07-15 11:23:50.609164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:08.114 Initializing NVMe Controllers 00:13:08.114 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:08.114 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:08.114 Initialization complete. Launching workers. 
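
The hello_world example above only exercises a single write/read round trip to namespace 1 (hence the lone "Hello world!" line), while the overhead tool that follows times each submission and completion separately: the submit/complete averages below are reported in nanoseconds and the histogram buckets in microseconds. A sketch of the recorded overhead invocation, assuming the same in-tree build (-H is the flag that was passed for the histogram output seen below):

    # 4096-byte I/Os (-o) for 1 second (-t); remaining flags as recorded in the log
    ./test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
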
00:13:08.114 submit (in ns) avg, min, max = 7413.8, 3227.8, 4003447.8 00:13:08.114 complete (in ns) avg, min, max = 20498.8, 1760.9, 4186698.3 00:13:08.114 00:13:08.114 Submit histogram 00:13:08.114 ================ 00:13:08.114 Range in us Cumulative Count 00:13:08.114 3.228 - 3.242: 0.0184% ( 3) 00:13:08.114 3.242 - 3.256: 0.0368% ( 3) 00:13:08.114 3.256 - 3.270: 0.0675% ( 5) 00:13:08.114 3.270 - 3.283: 0.1780% ( 18) 00:13:08.114 3.283 - 3.297: 0.4541% ( 45) 00:13:08.114 3.297 - 3.311: 1.2151% ( 124) 00:13:08.114 3.311 - 3.325: 2.2154% ( 163) 00:13:08.114 3.325 - 3.339: 3.7496% ( 250) 00:13:08.114 3.339 - 3.353: 6.6830% ( 478) 00:13:08.114 3.353 - 3.367: 11.3470% ( 760) 00:13:08.114 3.367 - 3.381: 16.8457% ( 896) 00:13:08.114 3.381 - 3.395: 23.2464% ( 1043) 00:13:08.114 3.395 - 3.409: 28.9905% ( 936) 00:13:08.114 3.409 - 3.423: 34.5812% ( 911) 00:13:08.114 3.423 - 3.437: 39.6809% ( 831) 00:13:08.114 3.437 - 3.450: 45.0752% ( 879) 00:13:08.114 3.450 - 3.464: 50.1319% ( 824) 00:13:08.114 3.464 - 3.478: 54.0104% ( 632) 00:13:08.114 3.478 - 3.492: 58.6622% ( 758) 00:13:08.114 3.492 - 3.506: 64.9033% ( 1017) 00:13:08.114 3.506 - 3.520: 70.1197% ( 850) 00:13:08.114 3.520 - 3.534: 74.3050% ( 682) 00:13:08.114 3.534 - 3.548: 78.3921% ( 666) 00:13:08.114 3.548 - 3.562: 82.1172% ( 607) 00:13:08.114 3.562 - 3.590: 86.3639% ( 692) 00:13:08.114 3.590 - 3.617: 87.6527% ( 210) 00:13:08.114 3.617 - 3.645: 88.6161% ( 157) 00:13:08.114 3.645 - 3.673: 90.0583% ( 235) 00:13:08.114 3.673 - 3.701: 91.7153% ( 270) 00:13:08.114 3.701 - 3.729: 93.3783% ( 271) 00:13:08.114 3.729 - 3.757: 95.1028% ( 281) 00:13:08.114 3.757 - 3.784: 96.6554% ( 253) 00:13:08.114 3.784 - 3.812: 97.9380% ( 209) 00:13:08.114 3.812 - 3.840: 98.7358% ( 130) 00:13:08.114 3.840 - 3.868: 99.2145% ( 78) 00:13:08.114 3.868 - 3.896: 99.5091% ( 48) 00:13:08.114 3.896 - 3.923: 99.6379% ( 21) 00:13:08.114 3.923 - 3.951: 99.6686% ( 5) 00:13:08.114 3.951 - 3.979: 99.6809% ( 2) 00:13:08.114 4.007 - 4.035: 99.6870% ( 1) 00:13:08.114 4.035 - 4.063: 99.6932% ( 1) 00:13:08.114 4.063 - 4.090: 99.6993% ( 1) 00:13:08.114 5.120 - 5.148: 99.7054% ( 1) 00:13:08.114 5.176 - 5.203: 99.7116% ( 1) 00:13:08.114 5.315 - 5.343: 99.7177% ( 1) 00:13:08.114 5.454 - 5.482: 99.7238% ( 1) 00:13:08.114 5.593 - 5.621: 99.7300% ( 1) 00:13:08.114 5.732 - 5.760: 99.7361% ( 1) 00:13:08.114 6.066 - 6.094: 99.7423% ( 1) 00:13:08.114 6.122 - 6.150: 99.7484% ( 1) 00:13:08.114 6.261 - 6.289: 99.7545% ( 1) 00:13:08.114 6.289 - 6.317: 99.7668% ( 2) 00:13:08.114 6.317 - 6.344: 99.7729% ( 1) 00:13:08.114 6.483 - 6.511: 99.7791% ( 1) 00:13:08.114 6.511 - 6.539: 99.7913% ( 2) 00:13:08.114 6.539 - 6.567: 99.8036% ( 2) 00:13:08.114 6.678 - 6.706: 99.8098% ( 1) 00:13:08.114 6.817 - 6.845: 99.8159% ( 1) 00:13:08.114 6.901 - 6.929: 99.8220% ( 1) 00:13:08.114 7.040 - 7.068: 99.8282% ( 1) 00:13:08.114 7.068 - 7.096: 99.8404% ( 2) 00:13:08.114 7.096 - 7.123: 99.8466% ( 1) 00:13:08.114 7.235 - 7.290: 99.8650% ( 3) 00:13:08.114 7.290 - 7.346: 99.8773% ( 2) 00:13:08.114 7.346 - 7.402: 99.8834% ( 1) 00:13:08.114 7.457 - 7.513: 99.8895% ( 1) 00:13:08.114 7.513 - 7.569: 99.8957% ( 1) 00:13:08.114 7.847 - 7.903: 99.9018% ( 1) 00:13:08.114 3989.148 - 4017.642: 100.0000% ( 16) 00:13:08.114 00:13:08.114 Complete histogram 00:13:08.114 ================== 00:13:08.114 Range in us Cumulative Count 00:13:08.114 1.760 - 1.767: 0.0675% ( 11) 00:13:08.114 1.767 - 1.774: 0.3130% ( 40) 00:13:08.114 1.774 - 1.781: 0.5891% ( 45) 00:13:08.114 1.781 - 1.795: 0.7978% ( 34) 00:13:08.114 1.795 - 1.809: 0.8407% ( 7) 
00:13:08.114 1.809 - 1.823: 9.8374% ( 1466) 00:13:08.114 1.823 - 1.837: 50.9359% ( 6697) 00:13:08.114 1.837 - 1.850: 64.5229% ( 2214) 00:13:08.114 1.850 - [2024-07-15 11:23:51.629983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:08.114 1.864: 66.9653% ( 398) 00:13:08.114 1.864 - 1.878: 69.2789% ( 377) 00:13:08.114 1.878 - 1.892: 82.8045% ( 2204) 00:13:08.114 1.892 - 1.906: 94.2743% ( 1869) 00:13:08.114 1.906 - 1.920: 97.2261% ( 481) 00:13:08.114 1.920 - 1.934: 98.0423% ( 133) 00:13:08.114 1.934 - 1.948: 98.2817% ( 39) 00:13:08.114 1.948 - 1.962: 98.7113% ( 70) 00:13:08.114 1.962 - 1.976: 99.0856% ( 61) 00:13:08.114 1.976 - 1.990: 99.2759% ( 31) 00:13:08.114 1.990 - 2.003: 99.3372% ( 10) 00:13:08.114 2.003 - 2.017: 99.3556% ( 3) 00:13:08.114 2.017 - 2.031: 99.3740% ( 3) 00:13:08.114 2.129 - 2.143: 99.3802% ( 1) 00:13:08.114 2.143 - 2.157: 99.3925% ( 2) 00:13:08.115 2.170 - 2.184: 99.3986% ( 1) 00:13:08.115 2.240 - 2.254: 99.4047% ( 1) 00:13:08.115 2.268 - 2.282: 99.4170% ( 2) 00:13:08.115 2.379 - 2.393: 99.4231% ( 1) 00:13:08.115 3.951 - 3.979: 99.4293% ( 1) 00:13:08.115 3.979 - 4.007: 99.4354% ( 1) 00:13:08.115 4.007 - 4.035: 99.4415% ( 1) 00:13:08.115 4.118 - 4.146: 99.4477% ( 1) 00:13:08.115 4.452 - 4.480: 99.4538% ( 1) 00:13:08.115 4.563 - 4.591: 99.4600% ( 1) 00:13:08.115 4.703 - 4.730: 99.4661% ( 1) 00:13:08.115 4.897 - 4.925: 99.4722% ( 1) 00:13:08.115 4.981 - 5.009: 99.4784% ( 1) 00:13:08.115 5.510 - 5.537: 99.4968% ( 3) 00:13:08.115 5.788 - 5.816: 99.5029% ( 1) 00:13:08.115 5.843 - 5.871: 99.5091% ( 1) 00:13:08.115 6.010 - 6.038: 99.5152% ( 1) 00:13:08.115 6.094 - 6.122: 99.5213% ( 1) 00:13:08.115 6.122 - 6.150: 99.5275% ( 1) 00:13:08.115 17.697 - 17.809: 99.5336% ( 1) 00:13:08.115 3989.148 - 4017.642: 99.9939% ( 75) 00:13:08.115 4160.111 - 4188.605: 100.0000% ( 1) 00:13:08.115 00:13:08.115 11:23:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:08.115 11:23:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:08.115 11:23:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:08.115 11:23:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:08.115 11:23:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:08.373 [ 00:13:08.373 { 00:13:08.373 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:08.373 "subtype": "Discovery", 00:13:08.373 "listen_addresses": [], 00:13:08.373 "allow_any_host": true, 00:13:08.373 "hosts": [] 00:13:08.373 }, 00:13:08.373 { 00:13:08.373 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:08.373 "subtype": "NVMe", 00:13:08.373 "listen_addresses": [ 00:13:08.373 { 00:13:08.373 "trtype": "VFIOUSER", 00:13:08.373 "adrfam": "IPv4", 00:13:08.373 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:08.373 "trsvcid": "0" 00:13:08.373 } 00:13:08.373 ], 00:13:08.373 "allow_any_host": true, 00:13:08.373 "hosts": [], 00:13:08.373 "serial_number": "SPDK1", 00:13:08.373 "model_number": "SPDK bdev Controller", 00:13:08.373 "max_namespaces": 32, 00:13:08.373 "min_cntlid": 1, 00:13:08.373 "max_cntlid": 65519, 00:13:08.373 "namespaces": [ 00:13:08.373 { 00:13:08.373 "nsid": 1, 00:13:08.373 "bdev_name": "Malloc1", 00:13:08.373 "name": "Malloc1", 
00:13:08.373 "nguid": "9C6B8CA9DD3E437B8C867F8ADA036AA7", 00:13:08.373 "uuid": "9c6b8ca9-dd3e-437b-8c86-7f8ada036aa7" 00:13:08.373 } 00:13:08.373 ] 00:13:08.373 }, 00:13:08.373 { 00:13:08.373 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:08.373 "subtype": "NVMe", 00:13:08.373 "listen_addresses": [ 00:13:08.373 { 00:13:08.373 "trtype": "VFIOUSER", 00:13:08.373 "adrfam": "IPv4", 00:13:08.373 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:08.373 "trsvcid": "0" 00:13:08.373 } 00:13:08.373 ], 00:13:08.373 "allow_any_host": true, 00:13:08.373 "hosts": [], 00:13:08.373 "serial_number": "SPDK2", 00:13:08.373 "model_number": "SPDK bdev Controller", 00:13:08.373 "max_namespaces": 32, 00:13:08.373 "min_cntlid": 1, 00:13:08.373 "max_cntlid": 65519, 00:13:08.373 "namespaces": [ 00:13:08.373 { 00:13:08.373 "nsid": 1, 00:13:08.373 "bdev_name": "Malloc2", 00:13:08.373 "name": "Malloc2", 00:13:08.374 "nguid": "110BD4E391954A47871123B796C2ABBB", 00:13:08.374 "uuid": "110bd4e3-9195-4a47-8711-23b796c2abbb" 00:13:08.374 } 00:13:08.374 ] 00:13:08.374 } 00:13:08.374 ] 00:13:08.374 11:23:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:08.374 11:23:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:08.374 11:23:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=530240 00:13:08.374 11:23:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:08.374 11:23:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:08.374 11:23:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:08.374 11:23:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:08.374 11:23:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:08.374 11:23:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:08.374 11:23:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:08.374 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.632 [2024-07-15 11:23:51.985633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:08.632 Malloc3 00:13:08.632 11:23:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:08.632 [2024-07-15 11:23:52.211360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:08.891 11:23:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:08.891 Asynchronous Event Request test 00:13:08.891 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:08.891 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:08.891 Registering asynchronous event callbacks... 00:13:08.891 Starting namespace attribute notice tests for all controllers... 
00:13:08.891 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:08.891 aer_cb - Changed Namespace 00:13:08.891 Cleaning up... 00:13:08.891 [ 00:13:08.891 { 00:13:08.891 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:08.891 "subtype": "Discovery", 00:13:08.891 "listen_addresses": [], 00:13:08.891 "allow_any_host": true, 00:13:08.891 "hosts": [] 00:13:08.891 }, 00:13:08.891 { 00:13:08.891 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:08.891 "subtype": "NVMe", 00:13:08.891 "listen_addresses": [ 00:13:08.891 { 00:13:08.891 "trtype": "VFIOUSER", 00:13:08.891 "adrfam": "IPv4", 00:13:08.891 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:08.891 "trsvcid": "0" 00:13:08.891 } 00:13:08.891 ], 00:13:08.891 "allow_any_host": true, 00:13:08.891 "hosts": [], 00:13:08.891 "serial_number": "SPDK1", 00:13:08.891 "model_number": "SPDK bdev Controller", 00:13:08.891 "max_namespaces": 32, 00:13:08.891 "min_cntlid": 1, 00:13:08.891 "max_cntlid": 65519, 00:13:08.891 "namespaces": [ 00:13:08.891 { 00:13:08.891 "nsid": 1, 00:13:08.891 "bdev_name": "Malloc1", 00:13:08.891 "name": "Malloc1", 00:13:08.891 "nguid": "9C6B8CA9DD3E437B8C867F8ADA036AA7", 00:13:08.891 "uuid": "9c6b8ca9-dd3e-437b-8c86-7f8ada036aa7" 00:13:08.891 }, 00:13:08.891 { 00:13:08.891 "nsid": 2, 00:13:08.892 "bdev_name": "Malloc3", 00:13:08.892 "name": "Malloc3", 00:13:08.892 "nguid": "28EE5343BAD44B31A464CF54AEC701D6", 00:13:08.892 "uuid": "28ee5343-bad4-4b31-a464-cf54aec701d6" 00:13:08.892 } 00:13:08.892 ] 00:13:08.892 }, 00:13:08.892 { 00:13:08.892 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:08.892 "subtype": "NVMe", 00:13:08.892 "listen_addresses": [ 00:13:08.892 { 00:13:08.892 "trtype": "VFIOUSER", 00:13:08.892 "adrfam": "IPv4", 00:13:08.892 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:08.892 "trsvcid": "0" 00:13:08.892 } 00:13:08.892 ], 00:13:08.892 "allow_any_host": true, 00:13:08.892 "hosts": [], 00:13:08.892 "serial_number": "SPDK2", 00:13:08.892 "model_number": "SPDK bdev Controller", 00:13:08.892 "max_namespaces": 32, 00:13:08.892 "min_cntlid": 1, 00:13:08.892 "max_cntlid": 65519, 00:13:08.892 "namespaces": [ 00:13:08.892 { 00:13:08.892 "nsid": 1, 00:13:08.892 "bdev_name": "Malloc2", 00:13:08.892 "name": "Malloc2", 00:13:08.892 "nguid": "110BD4E391954A47871123B796C2ABBB", 00:13:08.892 "uuid": "110bd4e3-9195-4a47-8711-23b796c2abbb" 00:13:08.892 } 00:13:08.892 ] 00:13:08.892 } 00:13:08.892 ] 00:13:08.892 11:23:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 530240 00:13:08.892 11:23:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:08.892 11:23:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:08.892 11:23:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:08.892 11:23:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:08.892 [2024-07-15 11:23:52.439061] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:08.892 [2024-07-15 11:23:52.439103] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530256 ] 00:13:08.892 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.892 [2024-07-15 11:23:52.467611] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:08.892 [2024-07-15 11:23:52.477467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:08.892 [2024-07-15 11:23:52.477488] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3ce8038000 00:13:08.892 [2024-07-15 11:23:52.478465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:08.892 [2024-07-15 11:23:52.479474] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:08.892 [2024-07-15 11:23:52.480478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:08.892 [2024-07-15 11:23:52.481485] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:08.892 [2024-07-15 11:23:52.482496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:09.152 [2024-07-15 11:23:52.483502] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:09.152 [2024-07-15 11:23:52.484511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:09.152 [2024-07-15 11:23:52.485520] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:09.152 [2024-07-15 11:23:52.486532] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:09.152 [2024-07-15 11:23:52.486541] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3ce802d000 00:13:09.152 [2024-07-15 11:23:52.487479] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:09.152 [2024-07-15 11:23:52.500005] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:09.153 [2024-07-15 11:23:52.500026] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:09.153 [2024-07-15 11:23:52.505110] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:09.153 [2024-07-15 11:23:52.505147] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:09.153 [2024-07-15 11:23:52.505212] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:13:09.153 [2024-07-15 11:23:52.505230] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:09.153 [2024-07-15 11:23:52.505235] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:09.153 [2024-07-15 11:23:52.506113] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:09.153 [2024-07-15 11:23:52.506122] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:09.153 [2024-07-15 11:23:52.506128] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:09.153 [2024-07-15 11:23:52.507115] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:09.153 [2024-07-15 11:23:52.507124] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:09.153 [2024-07-15 11:23:52.507130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:09.153 [2024-07-15 11:23:52.508121] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:09.153 [2024-07-15 11:23:52.508130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:09.153 [2024-07-15 11:23:52.509129] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:09.153 [2024-07-15 11:23:52.509137] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:09.153 [2024-07-15 11:23:52.509144] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:09.153 [2024-07-15 11:23:52.509149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:09.153 [2024-07-15 11:23:52.509254] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:09.153 [2024-07-15 11:23:52.509259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:09.153 [2024-07-15 11:23:52.509263] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:09.153 [2024-07-15 11:23:52.510139] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:09.153 [2024-07-15 11:23:52.511140] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:09.153 [2024-07-15 11:23:52.512145] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:09.153 [2024-07-15 11:23:52.513154] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:09.153 [2024-07-15 11:23:52.513192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:09.153 [2024-07-15 11:23:52.514163] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:09.153 [2024-07-15 11:23:52.514171] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:09.153 [2024-07-15 11:23:52.514176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.514192] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:09.153 [2024-07-15 11:23:52.514202] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.514213] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:09.153 [2024-07-15 11:23:52.514218] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:09.153 [2024-07-15 11:23:52.514232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:09.153 [2024-07-15 11:23:52.518231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.518242] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:09.153 [2024-07-15 11:23:52.518248] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:09.153 [2024-07-15 11:23:52.518252] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:09.153 [2024-07-15 11:23:52.518256] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:09.153 [2024-07-15 11:23:52.518260] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:09.153 [2024-07-15 11:23:52.518264] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:09.153 [2024-07-15 11:23:52.518270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.518277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.518287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:13:09.153 [2024-07-15 11:23:52.526231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.526245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.153 [2024-07-15 11:23:52.526252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.153 [2024-07-15 11:23:52.526259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.153 [2024-07-15 11:23:52.526267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.153 [2024-07-15 11:23:52.526271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.526278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.526286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:09.153 [2024-07-15 11:23:52.534231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.534238] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:09.153 [2024-07-15 11:23:52.534242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.534248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.534253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.534261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:09.153 [2024-07-15 11:23:52.542229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.542280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.542287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.542294] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:09.153 [2024-07-15 11:23:52.542299] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:09.153 [2024-07-15 11:23:52.542305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:13:09.153 [2024-07-15 11:23:52.550230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.550241] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:09.153 [2024-07-15 11:23:52.550252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.550258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.550265] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:09.153 [2024-07-15 11:23:52.550269] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:09.153 [2024-07-15 11:23:52.550275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:09.153 [2024-07-15 11:23:52.558229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.558245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.558252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.558259] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:09.153 [2024-07-15 11:23:52.558262] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:09.153 [2024-07-15 11:23:52.558268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:09.153 [2024-07-15 11:23:52.566232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.566242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.566248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.566257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.566262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.566267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.566271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:09.153 
[2024-07-15 11:23:52.566275] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:09.153 [2024-07-15 11:23:52.566279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:09.153 [2024-07-15 11:23:52.566283] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:09.153 [2024-07-15 11:23:52.566298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:09.153 [2024-07-15 11:23:52.574231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.574246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:09.153 [2024-07-15 11:23:52.582230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.582247] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:09.153 [2024-07-15 11:23:52.590231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.590246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:09.153 [2024-07-15 11:23:52.598232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.598249] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:09.153 [2024-07-15 11:23:52.598253] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:09.153 [2024-07-15 11:23:52.598256] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:09.153 [2024-07-15 11:23:52.598259] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:09.153 [2024-07-15 11:23:52.598265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:09.153 [2024-07-15 11:23:52.598272] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:09.153 [2024-07-15 11:23:52.598275] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:09.153 [2024-07-15 11:23:52.598281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:09.153 [2024-07-15 11:23:52.598288] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:09.153 [2024-07-15 11:23:52.598295] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:09.153 [2024-07-15 11:23:52.598301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:13:09.153 [2024-07-15 11:23:52.598309] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:09.153 [2024-07-15 11:23:52.598313] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:09.153 [2024-07-15 11:23:52.598319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:09.153 [2024-07-15 11:23:52.606232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.606246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.606256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:09.153 [2024-07-15 11:23:52.606262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:09.153 ===================================================== 00:13:09.153 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:09.153 ===================================================== 00:13:09.153 Controller Capabilities/Features 00:13:09.153 ================================ 00:13:09.153 Vendor ID: 4e58 00:13:09.153 Subsystem Vendor ID: 4e58 00:13:09.153 Serial Number: SPDK2 00:13:09.153 Model Number: SPDK bdev Controller 00:13:09.153 Firmware Version: 24.09 00:13:09.153 Recommended Arb Burst: 6 00:13:09.153 IEEE OUI Identifier: 8d 6b 50 00:13:09.153 Multi-path I/O 00:13:09.153 May have multiple subsystem ports: Yes 00:13:09.153 May have multiple controllers: Yes 00:13:09.153 Associated with SR-IOV VF: No 00:13:09.153 Max Data Transfer Size: 131072 00:13:09.153 Max Number of Namespaces: 32 00:13:09.153 Max Number of I/O Queues: 127 00:13:09.153 NVMe Specification Version (VS): 1.3 00:13:09.153 NVMe Specification Version (Identify): 1.3 00:13:09.153 Maximum Queue Entries: 256 00:13:09.153 Contiguous Queues Required: Yes 00:13:09.153 Arbitration Mechanisms Supported 00:13:09.153 Weighted Round Robin: Not Supported 00:13:09.153 Vendor Specific: Not Supported 00:13:09.153 Reset Timeout: 15000 ms 00:13:09.153 Doorbell Stride: 4 bytes 00:13:09.153 NVM Subsystem Reset: Not Supported 00:13:09.153 Command Sets Supported 00:13:09.153 NVM Command Set: Supported 00:13:09.153 Boot Partition: Not Supported 00:13:09.153 Memory Page Size Minimum: 4096 bytes 00:13:09.153 Memory Page Size Maximum: 4096 bytes 00:13:09.153 Persistent Memory Region: Not Supported 00:13:09.153 Optional Asynchronous Events Supported 00:13:09.153 Namespace Attribute Notices: Supported 00:13:09.153 Firmware Activation Notices: Not Supported 00:13:09.153 ANA Change Notices: Not Supported 00:13:09.154 PLE Aggregate Log Change Notices: Not Supported 00:13:09.154 LBA Status Info Alert Notices: Not Supported 00:13:09.154 EGE Aggregate Log Change Notices: Not Supported 00:13:09.154 Normal NVM Subsystem Shutdown event: Not Supported 00:13:09.154 Zone Descriptor Change Notices: Not Supported 00:13:09.154 Discovery Log Change Notices: Not Supported 00:13:09.154 Controller Attributes 00:13:09.154 128-bit Host Identifier: Supported 00:13:09.154 Non-Operational Permissive Mode: Not Supported 00:13:09.154 NVM Sets: Not Supported 00:13:09.154 Read Recovery Levels: Not Supported 
00:13:09.154 Endurance Groups: Not Supported 00:13:09.154 Predictable Latency Mode: Not Supported 00:13:09.154 Traffic Based Keep ALive: Not Supported 00:13:09.154 Namespace Granularity: Not Supported 00:13:09.154 SQ Associations: Not Supported 00:13:09.154 UUID List: Not Supported 00:13:09.154 Multi-Domain Subsystem: Not Supported 00:13:09.154 Fixed Capacity Management: Not Supported 00:13:09.154 Variable Capacity Management: Not Supported 00:13:09.154 Delete Endurance Group: Not Supported 00:13:09.154 Delete NVM Set: Not Supported 00:13:09.154 Extended LBA Formats Supported: Not Supported 00:13:09.154 Flexible Data Placement Supported: Not Supported 00:13:09.154 00:13:09.154 Controller Memory Buffer Support 00:13:09.154 ================================ 00:13:09.154 Supported: No 00:13:09.154 00:13:09.154 Persistent Memory Region Support 00:13:09.154 ================================ 00:13:09.154 Supported: No 00:13:09.154 00:13:09.154 Admin Command Set Attributes 00:13:09.154 ============================ 00:13:09.154 Security Send/Receive: Not Supported 00:13:09.154 Format NVM: Not Supported 00:13:09.154 Firmware Activate/Download: Not Supported 00:13:09.154 Namespace Management: Not Supported 00:13:09.154 Device Self-Test: Not Supported 00:13:09.154 Directives: Not Supported 00:13:09.154 NVMe-MI: Not Supported 00:13:09.154 Virtualization Management: Not Supported 00:13:09.154 Doorbell Buffer Config: Not Supported 00:13:09.154 Get LBA Status Capability: Not Supported 00:13:09.154 Command & Feature Lockdown Capability: Not Supported 00:13:09.154 Abort Command Limit: 4 00:13:09.154 Async Event Request Limit: 4 00:13:09.154 Number of Firmware Slots: N/A 00:13:09.154 Firmware Slot 1 Read-Only: N/A 00:13:09.154 Firmware Activation Without Reset: N/A 00:13:09.154 Multiple Update Detection Support: N/A 00:13:09.154 Firmware Update Granularity: No Information Provided 00:13:09.154 Per-Namespace SMART Log: No 00:13:09.154 Asymmetric Namespace Access Log Page: Not Supported 00:13:09.154 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:09.154 Command Effects Log Page: Supported 00:13:09.154 Get Log Page Extended Data: Supported 00:13:09.154 Telemetry Log Pages: Not Supported 00:13:09.154 Persistent Event Log Pages: Not Supported 00:13:09.154 Supported Log Pages Log Page: May Support 00:13:09.154 Commands Supported & Effects Log Page: Not Supported 00:13:09.154 Feature Identifiers & Effects Log Page:May Support 00:13:09.154 NVMe-MI Commands & Effects Log Page: May Support 00:13:09.154 Data Area 4 for Telemetry Log: Not Supported 00:13:09.154 Error Log Page Entries Supported: 128 00:13:09.154 Keep Alive: Supported 00:13:09.154 Keep Alive Granularity: 10000 ms 00:13:09.154 00:13:09.154 NVM Command Set Attributes 00:13:09.154 ========================== 00:13:09.154 Submission Queue Entry Size 00:13:09.154 Max: 64 00:13:09.154 Min: 64 00:13:09.154 Completion Queue Entry Size 00:13:09.154 Max: 16 00:13:09.154 Min: 16 00:13:09.154 Number of Namespaces: 32 00:13:09.154 Compare Command: Supported 00:13:09.154 Write Uncorrectable Command: Not Supported 00:13:09.154 Dataset Management Command: Supported 00:13:09.154 Write Zeroes Command: Supported 00:13:09.154 Set Features Save Field: Not Supported 00:13:09.154 Reservations: Not Supported 00:13:09.154 Timestamp: Not Supported 00:13:09.154 Copy: Supported 00:13:09.154 Volatile Write Cache: Present 00:13:09.154 Atomic Write Unit (Normal): 1 00:13:09.154 Atomic Write Unit (PFail): 1 00:13:09.154 Atomic Compare & Write Unit: 1 00:13:09.154 Fused Compare & Write: 
Supported 00:13:09.154 Scatter-Gather List 00:13:09.154 SGL Command Set: Supported (Dword aligned) 00:13:09.154 SGL Keyed: Not Supported 00:13:09.154 SGL Bit Bucket Descriptor: Not Supported 00:13:09.154 SGL Metadata Pointer: Not Supported 00:13:09.154 Oversized SGL: Not Supported 00:13:09.154 SGL Metadata Address: Not Supported 00:13:09.154 SGL Offset: Not Supported 00:13:09.154 Transport SGL Data Block: Not Supported 00:13:09.154 Replay Protected Memory Block: Not Supported 00:13:09.154 00:13:09.154 Firmware Slot Information 00:13:09.154 ========================= 00:13:09.154 Active slot: 1 00:13:09.154 Slot 1 Firmware Revision: 24.09 00:13:09.154 00:13:09.154 00:13:09.154 Commands Supported and Effects 00:13:09.154 ============================== 00:13:09.154 Admin Commands 00:13:09.154 -------------- 00:13:09.154 Get Log Page (02h): Supported 00:13:09.154 Identify (06h): Supported 00:13:09.154 Abort (08h): Supported 00:13:09.154 Set Features (09h): Supported 00:13:09.154 Get Features (0Ah): Supported 00:13:09.154 Asynchronous Event Request (0Ch): Supported 00:13:09.154 Keep Alive (18h): Supported 00:13:09.154 I/O Commands 00:13:09.154 ------------ 00:13:09.154 Flush (00h): Supported LBA-Change 00:13:09.154 Write (01h): Supported LBA-Change 00:13:09.154 Read (02h): Supported 00:13:09.154 Compare (05h): Supported 00:13:09.154 Write Zeroes (08h): Supported LBA-Change 00:13:09.154 Dataset Management (09h): Supported LBA-Change 00:13:09.154 Copy (19h): Supported LBA-Change 00:13:09.154 00:13:09.154 Error Log 00:13:09.154 ========= 00:13:09.154 00:13:09.154 Arbitration 00:13:09.154 =========== 00:13:09.154 Arbitration Burst: 1 00:13:09.154 00:13:09.154 Power Management 00:13:09.154 ================ 00:13:09.154 Number of Power States: 1 00:13:09.154 Current Power State: Power State #0 00:13:09.154 Power State #0: 00:13:09.154 Max Power: 0.00 W 00:13:09.154 Non-Operational State: Operational 00:13:09.154 Entry Latency: Not Reported 00:13:09.154 Exit Latency: Not Reported 00:13:09.154 Relative Read Throughput: 0 00:13:09.154 Relative Read Latency: 0 00:13:09.154 Relative Write Throughput: 0 00:13:09.154 Relative Write Latency: 0 00:13:09.154 Idle Power: Not Reported 00:13:09.154 Active Power: Not Reported 00:13:09.154 Non-Operational Permissive Mode: Not Supported 00:13:09.154 00:13:09.154 Health Information 00:13:09.154 ================== 00:13:09.154 Critical Warnings: 00:13:09.154 Available Spare Space: OK 00:13:09.154 Temperature: OK 00:13:09.154 Device Reliability: OK 00:13:09.154 Read Only: No 00:13:09.154 Volatile Memory Backup: OK 00:13:09.154 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:09.154 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:09.154 Available Spare: 0% 00:13:09.154 Available Sp[2024-07-15 11:23:52.606349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:09.154 [2024-07-15 11:23:52.614232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:09.154 [2024-07-15 11:23:52.614265] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:09.154 [2024-07-15 11:23:52.614273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.154 [2024-07-15 11:23:52.614279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.154 [2024-07-15 11:23:52.614284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.154 [2024-07-15 11:23:52.614291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.154 [2024-07-15 11:23:52.614330] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:09.154 [2024-07-15 11:23:52.614340] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:09.154 [2024-07-15 11:23:52.615336] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:09.154 [2024-07-15 11:23:52.615381] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:09.154 [2024-07-15 11:23:52.615387] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:09.154 [2024-07-15 11:23:52.616340] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:09.154 [2024-07-15 11:23:52.616352] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:09.154 [2024-07-15 11:23:52.616396] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:09.154 [2024-07-15 11:23:52.619231] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:09.154 are Threshold: 0% 00:13:09.154 Life Percentage Used: 0% 00:13:09.154 Data Units Read: 0 00:13:09.154 Data Units Written: 0 00:13:09.154 Host Read Commands: 0 00:13:09.154 Host Write Commands: 0 00:13:09.154 Controller Busy Time: 0 minutes 00:13:09.154 Power Cycles: 0 00:13:09.154 Power On Hours: 0 hours 00:13:09.154 Unsafe Shutdowns: 0 00:13:09.154 Unrecoverable Media Errors: 0 00:13:09.154 Lifetime Error Log Entries: 0 00:13:09.154 Warning Temperature Time: 0 minutes 00:13:09.154 Critical Temperature Time: 0 minutes 00:13:09.154 00:13:09.154 Number of Queues 00:13:09.154 ================ 00:13:09.154 Number of I/O Submission Queues: 127 00:13:09.154 Number of I/O Completion Queues: 127 00:13:09.154 00:13:09.154 Active Namespaces 00:13:09.154 ================= 00:13:09.154 Namespace ID:1 00:13:09.154 Error Recovery Timeout: Unlimited 00:13:09.154 Command Set Identifier: NVM (00h) 00:13:09.154 Deallocate: Supported 00:13:09.154 Deallocated/Unwritten Error: Not Supported 00:13:09.154 Deallocated Read Value: Unknown 00:13:09.154 Deallocate in Write Zeroes: Not Supported 00:13:09.154 Deallocated Guard Field: 0xFFFF 00:13:09.154 Flush: Supported 00:13:09.154 Reservation: Supported 00:13:09.154 Namespace Sharing Capabilities: Multiple Controllers 00:13:09.154 Size (in LBAs): 131072 (0GiB) 00:13:09.154 Capacity (in LBAs): 131072 (0GiB) 00:13:09.154 Utilization (in LBAs): 131072 (0GiB) 00:13:09.154 NGUID: 110BD4E391954A47871123B796C2ABBB 00:13:09.154 UUID: 110bd4e3-9195-4a47-8711-23b796c2abbb 00:13:09.154 Thin Provisioning: Not Supported 00:13:09.154 Per-NS Atomic Units: Yes 00:13:09.154 Atomic Boundary Size (Normal): 0 00:13:09.154 Atomic Boundary Size 
(PFail): 0 00:13:09.154 Atomic Boundary Offset: 0 00:13:09.154 Maximum Single Source Range Length: 65535 00:13:09.154 Maximum Copy Length: 65535 00:13:09.154 Maximum Source Range Count: 1 00:13:09.154 NGUID/EUI64 Never Reused: No 00:13:09.154 Namespace Write Protected: No 00:13:09.154 Number of LBA Formats: 1 00:13:09.154 Current LBA Format: LBA Format #00 00:13:09.154 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:09.154 00:13:09.154 11:23:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:09.154 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.413 [2024-07-15 11:23:52.821568] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:14.686 Initializing NVMe Controllers 00:13:14.686 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:14.686 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:14.686 Initialization complete. Launching workers. 00:13:14.686 ======================================================== 00:13:14.686 Latency(us) 00:13:14.686 Device Information : IOPS MiB/s Average min max 00:13:14.686 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39886.90 155.81 3208.83 960.42 6650.74 00:13:14.686 ======================================================== 00:13:14.686 Total : 39886.90 155.81 3208.83 960.42 6650.74 00:13:14.686 00:13:14.686 [2024-07-15 11:23:57.928464] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:14.686 11:23:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:14.686 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.686 [2024-07-15 11:23:58.148116] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:19.960 Initializing NVMe Controllers 00:13:19.960 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:19.960 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:19.960 Initialization complete. Launching workers. 
00:13:19.960 ======================================================== 00:13:19.960 Latency(us) 00:13:19.960 Device Information : IOPS MiB/s Average min max 00:13:19.960 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39962.20 156.10 3203.05 996.17 7395.45 00:13:19.960 ======================================================== 00:13:19.960 Total : 39962.20 156.10 3203.05 996.17 7395.45 00:13:19.960 00:13:19.960 [2024-07-15 11:24:03.169084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:19.960 11:24:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:19.960 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.960 [2024-07-15 11:24:03.372780] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:25.234 [2024-07-15 11:24:08.509337] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:25.234 Initializing NVMe Controllers 00:13:25.234 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:25.234 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:25.234 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:25.234 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:25.234 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:25.234 Initialization complete. Launching workers. 00:13:25.234 Starting thread on core 2 00:13:25.234 Starting thread on core 3 00:13:25.234 Starting thread on core 1 00:13:25.234 11:24:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:25.234 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.234 [2024-07-15 11:24:08.786645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:28.579 [2024-07-15 11:24:11.850466] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:28.579 Initializing NVMe Controllers 00:13:28.579 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:28.579 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:28.579 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:28.579 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:28.579 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:28.579 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:28.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:28.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:28.579 Initialization complete. Launching workers. 
00:13:28.579 Starting thread on core 1 with urgent priority queue 00:13:28.579 Starting thread on core 2 with urgent priority queue 00:13:28.579 Starting thread on core 3 with urgent priority queue 00:13:28.579 Starting thread on core 0 with urgent priority queue 00:13:28.579 SPDK bdev Controller (SPDK2 ) core 0: 9294.33 IO/s 10.76 secs/100000 ios 00:13:28.579 SPDK bdev Controller (SPDK2 ) core 1: 7194.67 IO/s 13.90 secs/100000 ios 00:13:28.579 SPDK bdev Controller (SPDK2 ) core 2: 8067.33 IO/s 12.40 secs/100000 ios 00:13:28.579 SPDK bdev Controller (SPDK2 ) core 3: 7377.33 IO/s 13.56 secs/100000 ios 00:13:28.579 ======================================================== 00:13:28.579 00:13:28.579 11:24:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:28.579 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.579 [2024-07-15 11:24:12.133645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:28.579 Initializing NVMe Controllers 00:13:28.579 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:28.579 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:28.579 Namespace ID: 1 size: 0GB 00:13:28.579 Initialization complete. 00:13:28.579 INFO: using host memory buffer for IO 00:13:28.579 Hello world! 00:13:28.579 [2024-07-15 11:24:12.142716] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:28.838 11:24:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:28.838 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.838 [2024-07-15 11:24:12.415133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:30.213 Initializing NVMe Controllers 00:13:30.213 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:30.213 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:30.213 Initialization complete. Launching workers. 
00:13:30.213 submit (in ns) avg, min, max = 6219.5, 3286.1, 4001592.2 00:13:30.213 complete (in ns) avg, min, max = 22640.6, 1820.0, 4036517.4 00:13:30.213 00:13:30.213 Submit histogram 00:13:30.213 ================ 00:13:30.213 Range in us Cumulative Count 00:13:30.213 3.283 - 3.297: 0.0556% ( 9) 00:13:30.213 3.297 - 3.311: 0.4080% ( 57) 00:13:30.213 3.311 - 3.325: 1.3166% ( 147) 00:13:30.213 3.325 - 3.339: 2.5590% ( 201) 00:13:30.213 3.339 - 3.353: 4.3763% ( 294) 00:13:30.213 3.353 - 3.367: 7.5906% ( 520) 00:13:30.213 3.367 - 3.381: 12.6097% ( 812) 00:13:30.213 3.381 - 3.395: 18.4819% ( 950) 00:13:30.213 3.395 - 3.409: 24.3788% ( 954) 00:13:30.213 3.409 - 3.423: 30.4735% ( 986) 00:13:30.213 3.423 - 3.437: 35.5854% ( 827) 00:13:30.213 3.437 - 3.450: 40.2893% ( 761) 00:13:30.213 3.450 - 3.464: 45.9080% ( 909) 00:13:30.213 3.464 - 3.478: 51.1497% ( 848) 00:13:30.213 3.478 - 3.492: 55.5446% ( 711) 00:13:30.213 3.492 - 3.506: 60.1681% ( 748) 00:13:30.213 3.506 - 3.520: 66.3617% ( 1002) 00:13:30.213 3.520 - 3.534: 71.6096% ( 849) 00:13:30.213 3.534 - 3.548: 75.8314% ( 683) 00:13:30.213 3.548 - 3.562: 79.7441% ( 633) 00:13:30.213 3.562 - 3.590: 85.6410% ( 954) 00:13:30.213 3.590 - 3.617: 87.7364% ( 339) 00:13:30.213 3.617 - 3.645: 88.6513% ( 148) 00:13:30.213 3.645 - 3.673: 89.9493% ( 210) 00:13:30.213 3.673 - 3.701: 91.5873% ( 265) 00:13:30.213 3.701 - 3.729: 93.2995% ( 277) 00:13:30.213 3.729 - 3.757: 94.8696% ( 254) 00:13:30.213 3.757 - 3.784: 96.4891% ( 262) 00:13:30.213 3.784 - 3.812: 97.6202% ( 183) 00:13:30.213 3.812 - 3.840: 98.4547% ( 135) 00:13:30.213 3.840 - 3.868: 98.9863% ( 86) 00:13:30.213 3.868 - 3.896: 99.2521% ( 43) 00:13:30.213 3.896 - 3.923: 99.4190% ( 27) 00:13:30.213 3.923 - 3.951: 99.5117% ( 15) 00:13:30.213 3.951 - 3.979: 99.5240% ( 2) 00:13:30.213 3.979 - 4.007: 99.5364% ( 2) 00:13:30.213 4.146 - 4.174: 99.5426% ( 1) 00:13:30.213 4.202 - 4.230: 99.5488% ( 1) 00:13:30.213 4.953 - 4.981: 99.5550% ( 1) 00:13:30.213 5.037 - 5.064: 99.5611% ( 1) 00:13:30.213 5.064 - 5.092: 99.5673% ( 1) 00:13:30.213 5.092 - 5.120: 99.5735% ( 1) 00:13:30.213 5.120 - 5.148: 99.5797% ( 1) 00:13:30.213 5.176 - 5.203: 99.5859% ( 1) 00:13:30.213 5.203 - 5.231: 99.5982% ( 2) 00:13:30.213 5.231 - 5.259: 99.6106% ( 2) 00:13:30.213 5.287 - 5.315: 99.6229% ( 2) 00:13:30.213 5.315 - 5.343: 99.6353% ( 2) 00:13:30.213 5.398 - 5.426: 99.6415% ( 1) 00:13:30.213 5.454 - 5.482: 99.6477% ( 1) 00:13:30.213 5.510 - 5.537: 99.6539% ( 1) 00:13:30.213 5.565 - 5.593: 99.6662% ( 2) 00:13:30.213 5.593 - 5.621: 99.6724% ( 1) 00:13:30.213 5.621 - 5.649: 99.6786% ( 1) 00:13:30.213 5.649 - 5.677: 99.6848% ( 1) 00:13:30.213 5.677 - 5.704: 99.6971% ( 2) 00:13:30.213 5.704 - 5.732: 99.7033% ( 1) 00:13:30.213 5.843 - 5.871: 99.7095% ( 1) 00:13:30.213 5.871 - 5.899: 99.7157% ( 1) 00:13:30.213 5.927 - 5.955: 99.7218% ( 1) 00:13:30.213 5.955 - 5.983: 99.7280% ( 1) 00:13:30.213 5.983 - 6.010: 99.7466% ( 3) 00:13:30.213 6.066 - 6.094: 99.7528% ( 1) 00:13:30.213 6.177 - 6.205: 99.7775% ( 4) 00:13:30.213 6.205 - 6.233: 99.7898% ( 2) 00:13:30.213 6.233 - 6.261: 99.8022% ( 2) 00:13:30.213 6.261 - 6.289: 99.8084% ( 1) 00:13:30.213 6.317 - 6.344: 99.8207% ( 2) 00:13:30.213 6.428 - 6.456: 99.8393% ( 3) 00:13:30.213 6.539 - 6.567: 99.8455% ( 1) 00:13:30.213 6.567 - 6.595: 99.8578% ( 2) 00:13:30.213 6.595 - 6.623: 99.8702% ( 2) 00:13:30.213 6.790 - 6.817: 99.8826% ( 2) 00:13:30.213 6.901 - 6.929: 99.8887% ( 1) 00:13:30.213 6.929 - 6.957: 99.8949% ( 1) 00:13:30.213 7.123 - 7.179: 99.9135% ( 3) 00:13:30.213 7.179 - 7.235: 99.9196% ( 1) 
00:13:30.213 7.346 - 7.402: 99.9258% ( 1) 00:13:30.213 7.624 - 7.680: 99.9320% ( 1) 00:13:30.213 [2024-07-15 11:24:13.511290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:30.213 3989.148 - 4017.642: 100.0000% ( 11) 00:13:30.213 00:13:30.213 Complete histogram 00:13:30.213 ================== 00:13:30.213 Range in us Cumulative Count 00:13:30.213 1.809 - 1.823: 0.0062% ( 1) 00:13:30.214 1.823 - 1.837: 1.1806% ( 190) 00:13:30.214 1.837 - 1.850: 5.2540% ( 659) 00:13:30.214 1.850 - 1.864: 7.1393% ( 305) 00:13:30.214 1.864 - 1.878: 8.7650% ( 263) 00:13:30.214 1.878 - 1.892: 36.3766% ( 4467) 00:13:30.214 1.892 - 1.906: 83.4528% ( 7616) 00:13:30.214 1.906 - 1.920: 93.2192% ( 1580) 00:13:30.214 1.920 - 1.934: 95.4506% ( 361) 00:13:30.214 1.934 - 1.948: 96.0564% ( 98) 00:13:30.214 1.948 - 1.962: 96.9094% ( 138) 00:13:30.214 1.962 - 1.976: 98.0838% ( 190) 00:13:30.214 1.976 - 1.990: 98.9121% ( 134) 00:13:30.214 1.990 - 2.003: 99.1532% ( 39) 00:13:30.214 2.003 - 2.017: 99.2583% ( 17) 00:13:30.214 2.017 - 2.031: 99.2892% ( 5) 00:13:30.214 2.031 - 2.045: 99.2953% ( 1) 00:13:30.214 2.045 - 2.059: 99.3015% ( 1) 00:13:30.214 2.059 - 2.073: 99.3077% ( 1) 00:13:30.214 2.073 - 2.087: 99.3201% ( 2) 00:13:30.214 2.101 - 2.115: 99.3262% ( 1) 00:13:30.214 3.172 - 3.186: 99.3324% ( 1) 00:13:30.214 3.381 - 3.395: 99.3386% ( 1) 00:13:30.214 3.492 - 3.506: 99.3448% ( 1) 00:13:30.214 3.548 - 3.562: 99.3510% ( 1) 00:13:30.214 3.617 - 3.645: 99.3572% ( 1) 00:13:30.214 3.645 - 3.673: 99.3633% ( 1) 00:13:30.214 3.840 - 3.868: 99.3695% ( 1) 00:13:30.214 3.868 - 3.896: 99.3757% ( 1) 00:13:30.214 4.007 - 4.035: 99.3819% ( 1) 00:13:30.214 4.202 - 4.230: 99.3881% ( 1) 00:13:30.214 4.230 - 4.257: 99.3942% ( 1) 00:13:30.214 4.424 - 4.452: 99.4004% ( 1) 00:13:30.214 4.591 - 4.619: 99.4066% ( 1) 00:13:30.214 4.619 - 4.647: 99.4190% ( 2) 00:13:30.214 4.647 - 4.675: 99.4251% ( 1) 00:13:30.214 4.758 - 4.786: 99.4313% ( 1) 00:13:30.214 4.870 - 4.897: 99.4375% ( 1) 00:13:30.214 4.925 - 4.953: 99.4437% ( 1) 00:13:30.214 5.037 - 5.064: 99.4499% ( 1) 00:13:30.214 5.092 - 5.120: 99.4561% ( 1) 00:13:30.214 5.120 - 5.148: 99.4622% ( 1) 00:13:30.214 5.398 - 5.426: 99.4684% ( 1) 00:13:30.214 6.873 - 6.901: 99.4746% ( 1) 00:13:30.214 7.290 - 7.346: 99.4808% ( 1) 00:13:30.214 3989.148 - 4017.642: 99.9938% ( 83) 00:13:30.214 4017.642 - 4046.136: 100.0000% ( 1) 00:13:30.214 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:30.214 [ 00:13:30.214 { 00:13:30.214 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:30.214 "subtype": "Discovery", 00:13:30.214 "listen_addresses": [], 00:13:30.214 "allow_any_host": true, 00:13:30.214 "hosts": [] 00:13:30.214 }, 00:13:30.214 { 00:13:30.214 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:30.214 "subtype": "NVMe", 00:13:30.214 "listen_addresses": [ 00:13:30.214 { 00:13:30.214 "trtype": "VFIOUSER", 00:13:30.214 
"adrfam": "IPv4", 00:13:30.214 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:30.214 "trsvcid": "0" 00:13:30.214 } 00:13:30.214 ], 00:13:30.214 "allow_any_host": true, 00:13:30.214 "hosts": [], 00:13:30.214 "serial_number": "SPDK1", 00:13:30.214 "model_number": "SPDK bdev Controller", 00:13:30.214 "max_namespaces": 32, 00:13:30.214 "min_cntlid": 1, 00:13:30.214 "max_cntlid": 65519, 00:13:30.214 "namespaces": [ 00:13:30.214 { 00:13:30.214 "nsid": 1, 00:13:30.214 "bdev_name": "Malloc1", 00:13:30.214 "name": "Malloc1", 00:13:30.214 "nguid": "9C6B8CA9DD3E437B8C867F8ADA036AA7", 00:13:30.214 "uuid": "9c6b8ca9-dd3e-437b-8c86-7f8ada036aa7" 00:13:30.214 }, 00:13:30.214 { 00:13:30.214 "nsid": 2, 00:13:30.214 "bdev_name": "Malloc3", 00:13:30.214 "name": "Malloc3", 00:13:30.214 "nguid": "28EE5343BAD44B31A464CF54AEC701D6", 00:13:30.214 "uuid": "28ee5343-bad4-4b31-a464-cf54aec701d6" 00:13:30.214 } 00:13:30.214 ] 00:13:30.214 }, 00:13:30.214 { 00:13:30.214 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:30.214 "subtype": "NVMe", 00:13:30.214 "listen_addresses": [ 00:13:30.214 { 00:13:30.214 "trtype": "VFIOUSER", 00:13:30.214 "adrfam": "IPv4", 00:13:30.214 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:30.214 "trsvcid": "0" 00:13:30.214 } 00:13:30.214 ], 00:13:30.214 "allow_any_host": true, 00:13:30.214 "hosts": [], 00:13:30.214 "serial_number": "SPDK2", 00:13:30.214 "model_number": "SPDK bdev Controller", 00:13:30.214 "max_namespaces": 32, 00:13:30.214 "min_cntlid": 1, 00:13:30.214 "max_cntlid": 65519, 00:13:30.214 "namespaces": [ 00:13:30.214 { 00:13:30.214 "nsid": 1, 00:13:30.214 "bdev_name": "Malloc2", 00:13:30.214 "name": "Malloc2", 00:13:30.214 "nguid": "110BD4E391954A47871123B796C2ABBB", 00:13:30.214 "uuid": "110bd4e3-9195-4a47-8711-23b796c2abbb" 00:13:30.214 } 00:13:30.214 ] 00:13:30.214 } 00:13:30.214 ] 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=534376 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:30.214 11:24:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:30.214 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.472 [2024-07-15 11:24:13.882624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:30.472 Malloc4 00:13:30.472 11:24:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:30.729 [2024-07-15 11:24:14.117398] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:30.729 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:30.729 Asynchronous Event Request test 00:13:30.729 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:30.729 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:30.729 Registering asynchronous event callbacks... 00:13:30.729 Starting namespace attribute notice tests for all controllers... 00:13:30.729 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:30.729 aer_cb - Changed Namespace 00:13:30.729 Cleaning up... 00:13:30.729 [ 00:13:30.729 { 00:13:30.729 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:30.729 "subtype": "Discovery", 00:13:30.729 "listen_addresses": [], 00:13:30.729 "allow_any_host": true, 00:13:30.729 "hosts": [] 00:13:30.729 }, 00:13:30.729 { 00:13:30.729 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:30.729 "subtype": "NVMe", 00:13:30.729 "listen_addresses": [ 00:13:30.729 { 00:13:30.729 "trtype": "VFIOUSER", 00:13:30.729 "adrfam": "IPv4", 00:13:30.729 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:30.729 "trsvcid": "0" 00:13:30.729 } 00:13:30.729 ], 00:13:30.729 "allow_any_host": true, 00:13:30.729 "hosts": [], 00:13:30.729 "serial_number": "SPDK1", 00:13:30.729 "model_number": "SPDK bdev Controller", 00:13:30.729 "max_namespaces": 32, 00:13:30.729 "min_cntlid": 1, 00:13:30.729 "max_cntlid": 65519, 00:13:30.729 "namespaces": [ 00:13:30.729 { 00:13:30.729 "nsid": 1, 00:13:30.729 "bdev_name": "Malloc1", 00:13:30.729 "name": "Malloc1", 00:13:30.729 "nguid": "9C6B8CA9DD3E437B8C867F8ADA036AA7", 00:13:30.729 "uuid": "9c6b8ca9-dd3e-437b-8c86-7f8ada036aa7" 00:13:30.729 }, 00:13:30.729 { 00:13:30.729 "nsid": 2, 00:13:30.729 "bdev_name": "Malloc3", 00:13:30.729 "name": "Malloc3", 00:13:30.729 "nguid": "28EE5343BAD44B31A464CF54AEC701D6", 00:13:30.729 "uuid": "28ee5343-bad4-4b31-a464-cf54aec701d6" 00:13:30.729 } 00:13:30.729 ] 00:13:30.729 }, 00:13:30.729 { 00:13:30.729 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:30.729 "subtype": "NVMe", 00:13:30.729 "listen_addresses": [ 00:13:30.729 { 00:13:30.729 "trtype": "VFIOUSER", 00:13:30.729 "adrfam": "IPv4", 00:13:30.730 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:30.730 "trsvcid": "0" 00:13:30.730 } 00:13:30.730 ], 00:13:30.730 "allow_any_host": true, 00:13:30.730 "hosts": [], 00:13:30.730 "serial_number": "SPDK2", 00:13:30.730 "model_number": "SPDK bdev Controller", 00:13:30.730 
"max_namespaces": 32, 00:13:30.730 "min_cntlid": 1, 00:13:30.730 "max_cntlid": 65519, 00:13:30.730 "namespaces": [ 00:13:30.730 { 00:13:30.730 "nsid": 1, 00:13:30.730 "bdev_name": "Malloc2", 00:13:30.730 "name": "Malloc2", 00:13:30.730 "nguid": "110BD4E391954A47871123B796C2ABBB", 00:13:30.730 "uuid": "110bd4e3-9195-4a47-8711-23b796c2abbb" 00:13:30.730 }, 00:13:30.730 { 00:13:30.730 "nsid": 2, 00:13:30.730 "bdev_name": "Malloc4", 00:13:30.730 "name": "Malloc4", 00:13:30.730 "nguid": "5FEEBDBDF26F4E57B1AE786F9C40E0D2", 00:13:30.730 "uuid": "5feebdbd-f26f-4e57-b1ae-786f9c40e0d2" 00:13:30.730 } 00:13:30.730 ] 00:13:30.730 } 00:13:30.730 ] 00:13:30.730 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 534376 00:13:30.730 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:30.730 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 526084 00:13:30.730 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 526084 ']' 00:13:30.730 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 526084 00:13:30.730 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:30.730 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:30.987 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 526084 00:13:30.987 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:30.987 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:30.987 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 526084' 00:13:30.987 killing process with pid 526084 00:13:30.987 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 526084 00:13:30.987 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 526084 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=534463 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 534463' 00:13:31.245 Process pid: 534463 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 534463 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 534463 ']' 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.245 11:24:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:31.245 [2024-07-15 11:24:14.666858] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:31.245 [2024-07-15 11:24:14.667691] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:13:31.245 [2024-07-15 11:24:14.667726] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.245 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.245 [2024-07-15 11:24:14.735509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.245 [2024-07-15 11:24:14.814584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.245 [2024-07-15 11:24:14.814624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.245 [2024-07-15 11:24:14.814633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.245 [2024-07-15 11:24:14.814639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.245 [2024-07-15 11:24:14.814643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.245 [2024-07-15 11:24:14.814701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.245 [2024-07-15 11:24:14.814811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.245 [2024-07-15 11:24:14.814915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.245 [2024-07-15 11:24:14.814916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.503 [2024-07-15 11:24:14.901045] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:31.503 [2024-07-15 11:24:14.901425] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:31.503 [2024-07-15 11:24:14.901604] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:31.503 [2024-07-15 11:24:14.901648] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:31.503 [2024-07-15 11:24:14.901994] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
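The target has just been relaunched with --interrupt-mode on cores 0-3, and the test script goes on to recreate the two vfio-user controllers, this time passing the transport flags -M -I. The lines that follow show the exact xtrace; this is only a condensed sketch of that RPC sequence, reusing the rpc.py path, bdev names, NQNs and socket directories specific to this CI run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # transport flags -M -I are the ones this interrupt-mode run passes (see xtrace below)
    $RPC nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $RPC bdev_malloc_create 64 512 -b Malloc$i
        $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done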
00:13:32.069 11:24:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.069 11:24:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:32.069 11:24:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:33.004 11:24:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:33.262 11:24:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:33.262 11:24:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:33.262 11:24:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:33.262 11:24:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:33.263 11:24:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:33.263 Malloc1 00:13:33.263 11:24:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:33.521 11:24:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:33.779 11:24:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:34.037 11:24:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:34.037 11:24:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:34.037 11:24:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:34.037 Malloc2 00:13:34.037 11:24:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:34.295 11:24:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:34.554 11:24:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:34.813 11:24:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:34.813 11:24:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 534463 00:13:34.813 11:24:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 534463 ']' 00:13:34.813 11:24:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 534463 00:13:34.813 11:24:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:34.813 11:24:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.813 11:24:18 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 534463 00:13:34.813 11:24:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:34.813 11:24:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:34.813 11:24:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 534463' 00:13:34.813 killing process with pid 534463 00:13:34.813 11:24:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 534463 00:13:34.813 11:24:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 534463 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:35.072 00:13:35.072 real 0m51.268s 00:13:35.072 user 3m22.816s 00:13:35.072 sys 0m3.600s 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:35.072 ************************************ 00:13:35.072 END TEST nvmf_vfio_user 00:13:35.072 ************************************ 00:13:35.072 11:24:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:35.072 11:24:18 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:35.072 11:24:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:35.072 11:24:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.072 11:24:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:35.072 ************************************ 00:13:35.072 START TEST nvmf_vfio_user_nvme_compliance 00:13:35.072 ************************************ 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:35.072 * Looking for test storage... 
00:13:35.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.072 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=535225 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 535225' 00:13:35.073 Process pid: 535225 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 535225 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 535225 ']' 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.073 11:24:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:35.073 [2024-07-15 11:24:18.662262] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:13:35.073 [2024-07-15 11:24:18.662312] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.330 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.330 [2024-07-15 11:24:18.729095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:35.330 [2024-07-15 11:24:18.808273] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.330 [2024-07-15 11:24:18.808306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.330 [2024-07-15 11:24:18.808314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.330 [2024-07-15 11:24:18.808319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.330 [2024-07-15 11:24:18.808324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:35.330 [2024-07-15 11:24:18.808369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.330 [2024-07-15 11:24:18.808477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.330 [2024-07-15 11:24:18.808478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.896 11:24:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:35.896 11:24:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:35.896 11:24:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:37.282 malloc0 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:37.282 11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.282 
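With the compliance target now running on three cores, the script next builds a single vfio-user subsystem and points the nvme_compliance binary at it. The rpc_cmd calls appear verbatim in the log below; this is only a condensed sketch of that sequence, using the same bdev, NQN and socket directory as this run:

    NQN=nqn.2021-09.io.spdk:cnode0
    TRADDR=/var/run/vfio-user
    rpc_cmd nvmf_create_transport -t VFIOUSER
    mkdir -p $TRADDR
    rpc_cmd bdev_malloc_create 64 512 -b malloc0
    # -s sets the serial number, -m the maximum namespace count
    rpc_cmd nvmf_create_subsystem $NQN -a -s spdk -m 32
    rpc_cmd nvmf_subsystem_add_ns $NQN malloc0
    rpc_cmd nvmf_subsystem_add_listener $NQN -t VFIOUSER -a $TRADDR -s 0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g \
        -r "trtype:VFIOUSER traddr:$TRADDR subnqn:$NQN"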
11:24:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:37.282 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.283 00:13:37.283 00:13:37.283 CUnit - A unit testing framework for C - Version 2.1-3 00:13:37.283 http://cunit.sourceforge.net/ 00:13:37.283 00:13:37.283 00:13:37.283 Suite: nvme_compliance 00:13:37.283 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 11:24:20.706703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:37.283 [2024-07-15 11:24:20.708030] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:37.283 [2024-07-15 11:24:20.708046] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:37.283 [2024-07-15 11:24:20.708052] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:37.283 [2024-07-15 11:24:20.709728] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:37.283 passed 00:13:37.283 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 11:24:20.789268] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:37.283 [2024-07-15 11:24:20.792289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:37.283 passed 00:13:37.283 Test: admin_identify_ns ...[2024-07-15 11:24:20.872034] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:37.541 [2024-07-15 11:24:20.932234] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:37.541 [2024-07-15 11:24:20.940236] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:37.541 [2024-07-15 11:24:20.961327] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:37.541 passed 00:13:37.541 Test: admin_get_features_mandatory_features ...[2024-07-15 11:24:21.038271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:37.541 [2024-07-15 11:24:21.041285] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:37.541 passed 00:13:37.541 Test: admin_get_features_optional_features ...[2024-07-15 11:24:21.118794] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:37.541 [2024-07-15 11:24:21.121813] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:37.800 passed 00:13:37.800 Test: admin_set_features_number_of_queues ...[2024-07-15 11:24:21.200598] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:37.800 [2024-07-15 11:24:21.306326] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:37.800 passed 00:13:37.800 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 11:24:21.382265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:37.800 [2024-07-15 11:24:21.385295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:38.058 passed 00:13:38.058 Test: admin_get_log_page_with_lpo ...[2024-07-15 11:24:21.464057] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:38.058 [2024-07-15 11:24:21.532238] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:38.058 [2024-07-15 11:24:21.545299] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:38.058 passed 00:13:38.058 Test: fabric_property_get ...[2024-07-15 11:24:21.622206] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:38.058 [2024-07-15 11:24:21.623459] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:38.058 [2024-07-15 11:24:21.625230] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:38.317 passed 00:13:38.317 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 11:24:21.700752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:38.317 [2024-07-15 11:24:21.701976] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:38.317 [2024-07-15 11:24:21.703770] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:38.317 passed 00:13:38.317 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 11:24:21.781624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:38.317 [2024-07-15 11:24:21.866234] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:38.317 [2024-07-15 11:24:21.882238] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:38.317 [2024-07-15 11:24:21.887313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:38.575 passed 00:13:38.575 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 11:24:21.961596] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:38.575 [2024-07-15 11:24:21.962828] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:38.575 [2024-07-15 11:24:21.964620] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:38.575 passed 00:13:38.575 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 11:24:22.043433] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:38.575 [2024-07-15 11:24:22.120235] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:38.575 [2024-07-15 11:24:22.144244] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:38.575 [2024-07-15 11:24:22.149319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:38.834 passed 00:13:38.834 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 11:24:22.224413] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:38.834 [2024-07-15 11:24:22.225646] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:38.834 [2024-07-15 11:24:22.225669] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:38.834 [2024-07-15 11:24:22.227435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:38.834 passed 00:13:38.834 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 11:24:22.304230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:38.834 [2024-07-15 11:24:22.397192] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:38.834 [2024-07-15 11:24:22.406233] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:38.834 [2024-07-15 11:24:22.414234] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:38.834 [2024-07-15 11:24:22.422230] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:39.092 [2024-07-15 11:24:22.451311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:39.092 passed 00:13:39.092 Test: admin_create_io_sq_verify_pc ...[2024-07-15 11:24:22.528279] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:39.092 [2024-07-15 11:24:22.545238] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:39.092 [2024-07-15 11:24:22.562453] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:39.092 passed 00:13:39.092 Test: admin_create_io_qp_max_qps ...[2024-07-15 11:24:22.639967] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:40.469 [2024-07-15 11:24:23.743233] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:40.728 [2024-07-15 11:24:24.131201] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:40.728 passed 00:13:40.728 Test: admin_create_io_sq_shared_cq ...[2024-07-15 11:24:24.205558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:40.987 [2024-07-15 11:24:24.341239] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:40.987 [2024-07-15 11:24:24.378305] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:40.987 passed 00:13:40.987 00:13:40.987 Run Summary: Type Total Ran Passed Failed Inactive 00:13:40.987 suites 1 1 n/a 0 0 00:13:40.987 tests 18 18 18 0 0 00:13:40.987 asserts 360 360 360 0 n/a 00:13:40.987 00:13:40.987 Elapsed time = 1.507 seconds 00:13:40.987 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 535225 00:13:40.987 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 535225 ']' 00:13:40.987 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 535225 00:13:40.987 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:40.987 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:40.987 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 535225 00:13:40.987 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:40.987 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:40.987 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 535225' 00:13:40.987 killing process with pid 535225 00:13:40.987 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 535225 00:13:40.987 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 535225 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:41.246 00:13:41.246 real 0m6.175s 00:13:41.246 user 0m17.580s 00:13:41.246 sys 0m0.491s 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:41.246 ************************************ 00:13:41.246 END TEST nvmf_vfio_user_nvme_compliance 00:13:41.246 ************************************ 00:13:41.246 11:24:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:41.246 11:24:24 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:41.246 11:24:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:41.246 11:24:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.246 11:24:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.246 ************************************ 00:13:41.246 START TEST nvmf_vfio_user_fuzz 00:13:41.246 ************************************ 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:41.246 * Looking for test storage... 00:13:41.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.246 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.506 11:24:24 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=536325 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 536325' 00:13:41.506 Process pid: 536325 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 536325 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 536325 ']' 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
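The fuzz run starts its own nvmf_tgt in the background and then blocks in waitforlisten until the application's RPC socket answers. A minimal stand-alone sketch of that start-and-wait step, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock socket (paths here are relative to the SPDK repo root and are illustrative):

# Start the target with the same core mask and tracepoint mask as above, then poll
# the RPC socket; waitforlisten does the same thing with a bounded retry loop.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done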
00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:41.506 11:24:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:42.480 11:24:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.480 11:24:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:42.480 11:24:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:43.422 malloc0 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:43.422 11:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:15.500 Fuzzing completed. 
Shutting down the fuzz application 00:14:15.500 00:14:15.500 Dumping successful admin opcodes: 00:14:15.500 8, 9, 10, 24, 00:14:15.500 Dumping successful io opcodes: 00:14:15.500 0, 00:14:15.500 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1031736, total successful commands: 4057, random_seed: 1211744320 00:14:15.500 NS: 0x200003a1ef00 admin qp, Total commands completed: 255955, total successful commands: 2066, random_seed: 4265863616 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 536325 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 536325 ']' 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 536325 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 536325 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 536325' 00:14:15.500 killing process with pid 536325 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 536325 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 536325 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:15.500 00:14:15.500 real 0m32.805s 00:14:15.500 user 0m31.270s 00:14:15.500 sys 0m31.051s 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:15.500 11:24:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:15.500 ************************************ 00:14:15.500 END TEST nvmf_vfio_user_fuzz 00:14:15.500 ************************************ 00:14:15.500 11:24:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:15.500 11:24:57 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:15.500 11:24:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:15.500 11:24:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.500 11:24:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.500 ************************************ 00:14:15.500 
START TEST nvmf_host_management 00:14:15.500 ************************************ 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:15.500 * Looking for test storage... 00:14:15.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.500 11:24:57 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.500 11:24:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.501 11:24:57 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:15.501 11:24:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:19.694 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:19.695 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:19.695 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:19.695 Found net devices under 0000:86:00.0: cvl_0_0 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:19.695 Found net devices under 0000:86:00.1: cvl_0_1 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.695 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:19.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:14:19.953 00:14:19.953 --- 10.0.0.2 ping statistics --- 00:14:19.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.953 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:14:19.953 00:14:19.953 --- 10.0.0.1 ping statistics --- 00:14:19.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.953 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=544732 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 544732 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:19.953 11:25:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 544732 ']' 00:14:19.954 11:25:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.954 11:25:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.954 11:25:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:19.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.954 11:25:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.954 11:25:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:19.954 [2024-07-15 11:25:03.535350] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:19.954 [2024-07-15 11:25:03.535389] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.211 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.211 [2024-07-15 11:25:03.606866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:20.212 [2024-07-15 11:25:03.687541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.212 [2024-07-15 11:25:03.687581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.212 [2024-07-15 11:25:03.687588] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.212 [2024-07-15 11:25:03.687594] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.212 [2024-07-15 11:25:03.687599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.212 [2024-07-15 11:25:03.689246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.212 [2024-07-15 11:25:03.689337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.212 [2024-07-15 11:25:03.689452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.212 [2024-07-15 11:25:03.689453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:20.777 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.777 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:20.777 11:25:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:20.778 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:20.778 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:21.037 [2024-07-15 11:25:04.388269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:21.037 11:25:04 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:21.037 Malloc0 00:14:21.037 [2024-07-15 11:25:04.448329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=544996 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 544996 /var/tmp/bdevperf.sock 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 544996 ']' 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:21.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
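The host-management target configured just above is set up partly with a direct transport RPC and partly with subsystem RPCs batched through rpcs.txt. Spelled out as individual calls against the running nvmf_tgt, that setup corresponds roughly to the sketch below; the transport options, Malloc geometry, listener address and subsystem NQN match the log, while the serial number is an illustrative placeholder:

# Rough per-command form of the target setup; the serial number is a placeholder.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420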
00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:21.037 { 00:14:21.037 "params": { 00:14:21.037 "name": "Nvme$subsystem", 00:14:21.037 "trtype": "$TEST_TRANSPORT", 00:14:21.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:21.037 "adrfam": "ipv4", 00:14:21.037 "trsvcid": "$NVMF_PORT", 00:14:21.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:21.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:21.037 "hdgst": ${hdgst:-false}, 00:14:21.037 "ddgst": ${ddgst:-false} 00:14:21.037 }, 00:14:21.037 "method": "bdev_nvme_attach_controller" 00:14:21.037 } 00:14:21.037 EOF 00:14:21.037 )") 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:21.037 11:25:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:21.037 "params": { 00:14:21.037 "name": "Nvme0", 00:14:21.037 "trtype": "tcp", 00:14:21.037 "traddr": "10.0.0.2", 00:14:21.037 "adrfam": "ipv4", 00:14:21.037 "trsvcid": "4420", 00:14:21.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:21.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:21.037 "hdgst": false, 00:14:21.037 "ddgst": false 00:14:21.037 }, 00:14:21.037 "method": "bdev_nvme_attach_controller" 00:14:21.037 }' 00:14:21.037 [2024-07-15 11:25:04.538377] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:21.037 [2024-07-15 11:25:04.538420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544996 ] 00:14:21.037 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.037 [2024-07-15 11:25:04.607194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.296 [2024-07-15 11:25:04.681796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.555 Running I/O for 10 seconds... 
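bdevperf receives the configuration generated above on /dev/fd/63 and then drives the 64-deep, 64 KiB verify workload for 10 seconds. Written out as a file, the same run looks roughly like the sketch below; the outer subsystems/bdev wrapper is the standard SPDK --json config layout and is assumed here, while the params block is taken verbatim from the generated config printed above:

# Assumed file name; wrapper layout assumed, params copied from the config above.
cat > /tmp/bdevperf_nvmf.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvmf.json -q 64 -o 65536 -w verify -t 10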
00:14:21.814 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.814 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:21.814 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:21.814 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.814 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:21.814 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.814 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:21.814 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:21.814 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:21.814 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:21.814 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:21.814 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:21.815 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:21.815 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:21.815 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:21.815 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:21.815 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.815 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:21.815 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.076 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:14:22.076 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:14:22.076 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:22.076 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:22.076 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:22.076 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:22.076 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.076 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:22.076 [2024-07-15 11:25:05.435658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2262460 is same with the state(5) to be set 00:14:22.076 [2024-07-15 11:25:05.435726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2262460 is same with the state(5) to be set 00:14:22.076 [2024-07-15 11:25:05.435734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2262460 is same with the state(5) to be 
set 00:14:22.076 [2024-07-15 11:25:05.435740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2262460 is same with the state(5) to be set 00:14:22.076 [2024-07-15 11:25:05.435747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2262460 is same with the state(5) to be set 00:14:22.076 [2024-07-15 11:25:05.435754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2262460 is same with the state(5) to be set 00:14:22.076 [2024-07-15 11:25:05.435760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2262460 is same with the state(5) to be set 00:14:22.076 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.076 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:22.076 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.076 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:22.076 [2024-07-15 11:25:05.444410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.076 [2024-07-15 11:25:05.444445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.076 [2024-07-15 11:25:05.444468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.076 [2024-07-15 11:25:05.444484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.076 [2024-07-15 11:25:05.444499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1950980 is same with the state(5) to be set 00:14:22.076 [2024-07-15 11:25:05.444540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.076 [2024-07-15 11:25:05.444889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.076 [2024-07-15 11:25:05.444898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.444907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.444915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.444924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.444932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.444943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.444951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.444960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.444968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.444977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.444985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.444994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.077 [2024-07-15 11:25:05.445564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.077 [2024-07-15 11:25:05.445572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.078 [2024-07-15 11:25:05.445580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.078 [2024-07-15 11:25:05.445589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.078 [2024-07-15 11:25:05.445597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.078 [2024-07-15 11:25:05.445607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.078 [2024-07-15 11:25:05.445616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.078 [2024-07-15 11:25:05.445625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.078 [2024-07-15 11:25:05.445633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.078 [2024-07-15 11:25:05.445642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.078 [2024-07-15 11:25:05.445650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.078 [2024-07-15 11:25:05.445659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:22.078 [2024-07-15 11:25:05.445667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.078 [2024-07-15 11:25:05.445675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.078 [2024-07-15 11:25:05.445683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.078 [2024-07-15 11:25:05.445745] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d61b20 was disconnected and freed. reset controller. 00:14:22.078 [2024-07-15 11:25:05.446648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:22.078 task offset: 106496 on job bdev=Nvme0n1 fails 00:14:22.078 00:14:22.078 Latency(us) 00:14:22.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.078 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:22.078 Job: Nvme0n1 ended in about 0.44 seconds with error 00:14:22.078 Verification LBA range: start 0x0 length 0x400 00:14:22.078 Nvme0n1 : 0.44 1884.02 117.75 144.92 0.00 30742.69 1816.49 27126.21 00:14:22.078 =================================================================================================================== 00:14:22.078 Total : 1884.02 117.75 144.92 0.00 30742.69 1816.49 27126.21 00:14:22.078 11:25:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.078 [2024-07-15 11:25:05.448265] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:22.078 [2024-07-15 11:25:05.448283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1950980 (9): Bad file descriptor 00:14:22.078 11:25:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:22.078 [2024-07-15 11:25:05.458652] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
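For reference, the waitforio step traced above (target/host_management.sh@52-64) is just a bounded polling loop against bdevperf's RPC socket; a minimal standalone sketch, assuming scripts/rpc.py and jq are on PATH and the socket/bdev names match this run:

  #!/usr/bin/env bash
  # Poll read ops on Nvme0n1 through the bdevperf RPC socket until I/O is observed.
  rpc_sock=/var/tmp/bdevperf.sock
  bdev=Nvme0n1
  ret=1
  for ((i = 10; i != 0; i--)); do
      read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0].num_read_ops')
      # The trace above read 707 ops, which clears the 100-op threshold on the first pass.
      if [ "$read_io_count" -ge 100 ]; then
          ret=0
          break
      fi
      sleep 0.25   # retry interval chosen for illustration
  done
  exit $ret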
00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 544996 00:14:23.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (544996) - No such process 00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:23.015 { 00:14:23.015 "params": { 00:14:23.015 "name": "Nvme$subsystem", 00:14:23.015 "trtype": "$TEST_TRANSPORT", 00:14:23.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:23.015 "adrfam": "ipv4", 00:14:23.015 "trsvcid": "$NVMF_PORT", 00:14:23.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:23.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:23.015 "hdgst": ${hdgst:-false}, 00:14:23.015 "ddgst": ${ddgst:-false} 00:14:23.015 }, 00:14:23.015 "method": "bdev_nvme_attach_controller" 00:14:23.015 } 00:14:23.015 EOF 00:14:23.015 )") 00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:23.015 11:25:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:23.015 "params": { 00:14:23.015 "name": "Nvme0", 00:14:23.015 "trtype": "tcp", 00:14:23.015 "traddr": "10.0.0.2", 00:14:23.015 "adrfam": "ipv4", 00:14:23.015 "trsvcid": "4420", 00:14:23.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:23.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:23.015 "hdgst": false, 00:14:23.015 "ddgst": false 00:14:23.015 }, 00:14:23.015 "method": "bdev_nvme_attach_controller" 00:14:23.015 }' 00:14:23.015 [2024-07-15 11:25:06.500008] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:23.015 [2024-07-15 11:25:06.500057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid545248 ] 00:14:23.015 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.015 [2024-07-15 11:25:06.568605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.275 [2024-07-15 11:25:06.639407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.534 Running I/O for 1 seconds... 
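The --json /dev/fd/62 payload printed above only attaches the target's namespace inside bdevperf as bdev Nvme0. Roughly the same attach could be issued at runtime over the RPC socket instead of via the generated config; a sketch under the assumption that the stock rpc.py flags apply and the addresses/NQNs match this run:

  # Hypothetical runtime equivalent of the generated bdev_nvme_attach_controller entry
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0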
00:14:24.472 00:14:24.472 Latency(us) 00:14:24.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.472 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:24.472 Verification LBA range: start 0x0 length 0x400 00:14:24.472 Nvme0n1 : 1.03 1931.78 120.74 0.00 0.00 32617.87 7094.98 27240.18 00:14:24.472 =================================================================================================================== 00:14:24.472 Total : 1931.78 120.74 0.00 0.00 32617.87 7094.98 27240.18 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:24.732 rmmod nvme_tcp 00:14:24.732 rmmod nvme_fabrics 00:14:24.732 rmmod nvme_keyring 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 544732 ']' 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 544732 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 544732 ']' 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 544732 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:24.732 11:25:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 544732 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 544732' 00:14:24.992 killing process with pid 544732 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 544732 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 544732 00:14:24.992 [2024-07-15 11:25:08.507774] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.992 11:25:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.565 11:25:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:27.565 11:25:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:27.565 00:14:27.565 real 0m12.988s 00:14:27.565 user 0m23.565s 00:14:27.565 sys 0m5.452s 00:14:27.565 11:25:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:27.565 11:25:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:27.565 ************************************ 00:14:27.565 END TEST nvmf_host_management 00:14:27.565 ************************************ 00:14:27.565 11:25:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:27.565 11:25:10 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:27.565 11:25:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:27.565 11:25:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.565 11:25:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:27.565 ************************************ 00:14:27.565 START TEST nvmf_lvol 00:14:27.565 ************************************ 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:27.565 * Looking for test storage... 
00:14:27.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.565 11:25:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.566 11:25:10 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:27.566 11:25:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:32.838 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.838 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:32.838 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:32.839 Found net devices under 0000:86:00.0: cvl_0_0 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:32.839 Found net devices under 0000:86:00.1: cvl_0_1 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:32.839 
11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:32.839 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:33.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:14:33.098 00:14:33.098 --- 10.0.0.2 ping statistics --- 00:14:33.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.098 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:33.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:33.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:14:33.098 00:14:33.098 --- 10.0.0.1 ping statistics --- 00:14:33.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.098 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=549010 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 549010 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 549010 ']' 00:14:33.098 11:25:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.099 11:25:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.099 11:25:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.099 11:25:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.099 11:25:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:33.099 [2024-07-15 11:25:16.606356] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:33.099 [2024-07-15 11:25:16.606401] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.099 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.099 [2024-07-15 11:25:16.678910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:33.357 [2024-07-15 11:25:16.758902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.357 [2024-07-15 11:25:16.758938] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:33.357 [2024-07-15 11:25:16.758945] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.357 [2024-07-15 11:25:16.758951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.357 [2024-07-15 11:25:16.758957] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.357 [2024-07-15 11:25:16.759009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.357 [2024-07-15 11:25:16.759116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.357 [2024-07-15 11:25:16.759117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.925 11:25:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.925 11:25:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:33.925 11:25:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:33.925 11:25:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:33.925 11:25:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:33.925 11:25:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.925 11:25:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:34.184 [2024-07-15 11:25:17.601016] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.184 11:25:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:34.443 11:25:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:34.443 11:25:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:34.701 11:25:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:34.701 11:25:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:34.701 11:25:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:34.958 11:25:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=38749686-0f2a-4179-abf9-43549e6f2c83 00:14:34.958 11:25:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 38749686-0f2a-4179-abf9-43549e6f2c83 lvol 20 00:14:35.216 11:25:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8b3a9f88-79c6-42bf-afcd-81f0f00a2498 00:14:35.216 11:25:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:35.216 11:25:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b3a9f88-79c6-42bf-afcd-81f0f00a2498 00:14:35.473 11:25:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
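Stripped of the xtrace noise, the target provisioning traced so far reduces to the following rpc.py sequence (a sketch; the long jenkins workspace path is shortened to rpc.py, and the UUIDs are the ones reported above). The tcp.c listen notice that follows confirms the final step:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512        # first base bdev (Malloc0)
  rpc.py bdev_malloc_create 64 512        # second base bdev (Malloc1)
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs                                  # -> 38749686-0f2a-4179-abf9-43549e6f2c83
  rpc.py bdev_lvol_create -u 38749686-0f2a-4179-abf9-43549e6f2c83 lvol 20    # -> 8b3a9f88-79c6-42bf-afcd-81f0f00a2498
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b3a9f88-79c6-42bf-afcd-81f0f00a2498
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420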
00:14:35.731 [2024-07-15 11:25:19.117841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.731 11:25:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:35.990 11:25:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=549497 00:14:35.990 11:25:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:35.990 11:25:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:35.990 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.926 11:25:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8b3a9f88-79c6-42bf-afcd-81f0f00a2498 MY_SNAPSHOT 00:14:37.185 11:25:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3492e595-bf42-4b48-bd02-f435bbe49d4b 00:14:37.185 11:25:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8b3a9f88-79c6-42bf-afcd-81f0f00a2498 30 00:14:37.444 11:25:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3492e595-bf42-4b48-bd02-f435bbe49d4b MY_CLONE 00:14:37.444 11:25:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=62713c08-030d-4050-b6b0-c117b21f8e0e 00:14:37.444 11:25:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 62713c08-030d-4050-b6b0-c117b21f8e0e 00:14:38.011 11:25:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 549497 00:14:46.129 Initializing NVMe Controllers 00:14:46.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:46.129 Controller IO queue size 128, less than required. 00:14:46.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:46.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:46.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:46.129 Initialization complete. Launching workers. 
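While spdk_nvme_perf (started just above: 4 KiB random writes, queue depth 128, 10 seconds, cores 3-4) keeps the exported namespace busy, the script runs the volume operations whose UUIDs appear in the trace; as a sketch:

    snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only snapshot of the live, busy lvol
    $RPC bdev_lvol_resize "$lvol" 30                      # grow the origin volume to 30 MiB
    clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)        # thin, writable clone of the snapshot
    $RPC bdev_lvol_inflate "$clone"                       # give the clone its own clusters, detaching it from the snapshot
    wait "$perf_pid"                                      # let the 10-second perf run finish before teardown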
00:14:46.129 ======================================================== 00:14:46.129 Latency(us) 00:14:46.129 Device Information : IOPS MiB/s Average min max 00:14:46.129 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12056.00 47.09 10621.25 1798.74 65509.26 00:14:46.129 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11930.20 46.60 10730.53 3719.02 58082.68 00:14:46.129 ======================================================== 00:14:46.129 Total : 23986.20 93.70 10675.60 1798.74 65509.26 00:14:46.129 00:14:46.129 11:25:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:46.388 11:25:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8b3a9f88-79c6-42bf-afcd-81f0f00a2498 00:14:46.647 11:25:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 38749686-0f2a-4179-abf9-43549e6f2c83 00:14:46.647 11:25:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:46.647 11:25:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:46.647 11:25:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:46.647 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:46.647 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.906 rmmod nvme_tcp 00:14:46.906 rmmod nvme_fabrics 00:14:46.906 rmmod nvme_keyring 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 549010 ']' 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 549010 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 549010 ']' 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 549010 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 549010 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 549010' 00:14:46.906 killing process with pid 549010 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 549010 00:14:46.906 11:25:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 549010 00:14:47.165 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:47.165 11:25:30 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:47.165 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:47.165 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.165 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:47.165 11:25:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.165 11:25:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.165 11:25:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.070 11:25:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:49.070 00:14:49.070 real 0m21.967s 00:14:49.070 user 1m4.208s 00:14:49.070 sys 0m6.962s 00:14:49.070 11:25:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.070 11:25:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:49.070 ************************************ 00:14:49.070 END TEST nvmf_lvol 00:14:49.070 ************************************ 00:14:49.328 11:25:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:49.328 11:25:32 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:49.328 11:25:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:49.328 11:25:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.328 11:25:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:49.328 ************************************ 00:14:49.328 START TEST nvmf_lvs_grow 00:14:49.328 ************************************ 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:49.328 * Looking for test storage... 
00:14:49.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.328 11:25:32 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:49.329 11:25:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:55.921 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:55.921 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:55.921 Found net devices under 0000:86:00.0: cvl_0_0 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:55.921 Found net devices under 0000:86:00.1: cvl_0_1 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:55.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:14:55.921 00:14:55.921 --- 10.0.0.2 ping statistics --- 00:14:55.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.921 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:55.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:55.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:14:55.921 00:14:55.921 --- 10.0.0.1 ping statistics --- 00:14:55.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.921 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=554856 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 554856 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 554856 ']' 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.921 11:25:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:55.921 [2024-07-15 11:25:38.644191] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:55.921 [2024-07-15 11:25:38.644237] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.921 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.921 [2024-07-15 11:25:38.713066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.921 [2024-07-15 11:25:38.783896] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.921 [2024-07-15 11:25:38.783936] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
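For reference, the interface plumbing that produced the two successful pings above reduces to this sketch (cvl_0_0 and cvl_0_1 are the net devices the ice driver created for the two E810 ports found earlier; the target-side port is moved into its own network namespace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic back in on port 4420
    ping -c 1 10.0.0.2                                               # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target namespace -> root namespace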
00:14:55.921 [2024-07-15 11:25:38.783942] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.921 [2024-07-15 11:25:38.783948] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.921 [2024-07-15 11:25:38.783953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.921 [2024-07-15 11:25:38.783972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.921 11:25:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.921 11:25:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:55.921 11:25:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.921 11:25:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:55.921 11:25:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:55.921 11:25:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.921 11:25:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:56.180 [2024-07-15 11:25:39.651462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:56.180 ************************************ 00:14:56.180 START TEST lvs_grow_clean 00:14:56.180 ************************************ 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.180 11:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:56.438 11:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:56.438 11:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:56.696 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 00:14:56.696 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 00:14:56.696 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:56.696 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:56.696 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:56.696 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 lvol 150 00:14:56.955 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=925e0786-6741-48d3-8e0d-57a1e8dadcf0 00:14:56.955 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.955 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:57.213 [2024-07-15 11:25:40.617093] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:57.213 [2024-07-15 11:25:40.617145] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:57.213 true 00:14:57.213 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 00:14:57.213 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:57.472 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:57.472 11:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:57.472 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 925e0786-6741-48d3-8e0d-57a1e8dadcf0 00:14:57.730 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:57.989 [2024-07-15 11:25:41.335251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.989 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:57.989 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=555374 00:14:57.989 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:57.989 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:57.989 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 555374 /var/tmp/bdevperf.sock 00:14:57.989 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 555374 ']' 00:14:57.989 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:57.989 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.989 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:57.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:57.989 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.989 11:25:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:57.989 [2024-07-15 11:25:41.577685] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
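The lvs_grow_clean setup traced above builds its logical-volume store on a file-backed AIO bdev that starts out smaller than it will later become; condensed into a sketch (AIO_FILE stands for the .../spdk/test/nvmf/target/aio_bdev path used above, and the sizes and cluster counts are the ones from this run):

    AIO_FILE=$SPDK/test/nvmf/target/aio_bdev
    truncate -s 200M "$AIO_FILE"                          # 200 MiB backing file
    $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096        # AIO bdev with 4 KiB blocks
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 data clusters at this point
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)      # 150 MiB lvol
    truncate -s 400M "$AIO_FILE"                          # grow the backing file...
    $RPC bdev_aio_rescan aio_bdev                         # ...and let the AIO bdev pick up the new block count
    # the lvstore still reports 49 clusters until bdev_lvol_grow_lvstore is called later

The lvol is then exported through nqn.2016-06.io.spdk:cnode0 as before, and bdevperf attaches from the initiator side with bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0, producing the Nvme0n1 bdev whose JSON description appears below.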
00:14:57.989 [2024-07-15 11:25:41.577734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid555374 ] 00:14:58.277 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.277 [2024-07-15 11:25:41.643500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.277 [2024-07-15 11:25:41.715707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.845 11:25:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.845 11:25:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:58.845 11:25:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:59.411 Nvme0n1 00:14:59.411 11:25:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:59.411 [ 00:14:59.411 { 00:14:59.411 "name": "Nvme0n1", 00:14:59.411 "aliases": [ 00:14:59.411 "925e0786-6741-48d3-8e0d-57a1e8dadcf0" 00:14:59.411 ], 00:14:59.411 "product_name": "NVMe disk", 00:14:59.411 "block_size": 4096, 00:14:59.411 "num_blocks": 38912, 00:14:59.411 "uuid": "925e0786-6741-48d3-8e0d-57a1e8dadcf0", 00:14:59.411 "assigned_rate_limits": { 00:14:59.411 "rw_ios_per_sec": 0, 00:14:59.411 "rw_mbytes_per_sec": 0, 00:14:59.411 "r_mbytes_per_sec": 0, 00:14:59.411 "w_mbytes_per_sec": 0 00:14:59.411 }, 00:14:59.411 "claimed": false, 00:14:59.411 "zoned": false, 00:14:59.411 "supported_io_types": { 00:14:59.411 "read": true, 00:14:59.411 "write": true, 00:14:59.411 "unmap": true, 00:14:59.411 "flush": true, 00:14:59.411 "reset": true, 00:14:59.411 "nvme_admin": true, 00:14:59.411 "nvme_io": true, 00:14:59.411 "nvme_io_md": false, 00:14:59.411 "write_zeroes": true, 00:14:59.411 "zcopy": false, 00:14:59.411 "get_zone_info": false, 00:14:59.411 "zone_management": false, 00:14:59.411 "zone_append": false, 00:14:59.411 "compare": true, 00:14:59.411 "compare_and_write": true, 00:14:59.411 "abort": true, 00:14:59.411 "seek_hole": false, 00:14:59.411 "seek_data": false, 00:14:59.411 "copy": true, 00:14:59.411 "nvme_iov_md": false 00:14:59.411 }, 00:14:59.411 "memory_domains": [ 00:14:59.411 { 00:14:59.411 "dma_device_id": "system", 00:14:59.411 "dma_device_type": 1 00:14:59.411 } 00:14:59.411 ], 00:14:59.411 "driver_specific": { 00:14:59.411 "nvme": [ 00:14:59.411 { 00:14:59.411 "trid": { 00:14:59.411 "trtype": "TCP", 00:14:59.411 "adrfam": "IPv4", 00:14:59.411 "traddr": "10.0.0.2", 00:14:59.411 "trsvcid": "4420", 00:14:59.411 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:59.411 }, 00:14:59.411 "ctrlr_data": { 00:14:59.411 "cntlid": 1, 00:14:59.411 "vendor_id": "0x8086", 00:14:59.411 "model_number": "SPDK bdev Controller", 00:14:59.411 "serial_number": "SPDK0", 00:14:59.411 "firmware_revision": "24.09", 00:14:59.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:59.411 "oacs": { 00:14:59.411 "security": 0, 00:14:59.411 "format": 0, 00:14:59.411 "firmware": 0, 00:14:59.411 "ns_manage": 0 00:14:59.411 }, 00:14:59.411 "multi_ctrlr": true, 00:14:59.411 "ana_reporting": false 00:14:59.411 }, 
00:14:59.411 "vs": { 00:14:59.411 "nvme_version": "1.3" 00:14:59.411 }, 00:14:59.411 "ns_data": { 00:14:59.411 "id": 1, 00:14:59.411 "can_share": true 00:14:59.411 } 00:14:59.411 } 00:14:59.411 ], 00:14:59.411 "mp_policy": "active_passive" 00:14:59.411 } 00:14:59.411 } 00:14:59.411 ] 00:14:59.411 11:25:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=555608 00:14:59.411 11:25:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:59.411 11:25:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:59.685 Running I/O for 10 seconds... 00:15:00.620 Latency(us) 00:15:00.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.620 Nvme0n1 : 1.00 23404.00 91.42 0.00 0.00 0.00 0.00 0.00 00:15:00.620 =================================================================================================================== 00:15:00.620 Total : 23404.00 91.42 0.00 0.00 0.00 0.00 0.00 00:15:00.620 00:15:01.556 11:25:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 00:15:01.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.556 Nvme0n1 : 2.00 23553.50 92.01 0.00 0.00 0.00 0.00 0.00 00:15:01.556 =================================================================================================================== 00:15:01.556 Total : 23553.50 92.01 0.00 0.00 0.00 0.00 0.00 00:15:01.556 00:15:01.556 true 00:15:01.816 11:25:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 00:15:01.816 11:25:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:01.816 11:25:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:01.816 11:25:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:01.816 11:25:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 555608 00:15:02.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.753 Nvme0n1 : 3.00 23584.00 92.12 0.00 0.00 0.00 0.00 0.00 00:15:02.753 =================================================================================================================== 00:15:02.753 Total : 23584.00 92.12 0.00 0.00 0.00 0.00 0.00 00:15:02.753 00:15:03.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.691 Nvme0n1 : 4.00 23631.00 92.31 0.00 0.00 0.00 0.00 0.00 00:15:03.691 =================================================================================================================== 00:15:03.691 Total : 23631.00 92.31 0.00 0.00 0.00 0.00 0.00 00:15:03.691 00:15:04.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.630 Nvme0n1 : 5.00 23656.40 92.41 0.00 0.00 0.00 0.00 0.00 00:15:04.630 =================================================================================================================== 00:15:04.630 
Total : 23656.40 92.41 0.00 0.00 0.00 0.00 0.00 00:15:04.630 00:15:05.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.568 Nvme0n1 : 6.00 23694.00 92.55 0.00 0.00 0.00 0.00 0.00 00:15:05.568 =================================================================================================================== 00:15:05.568 Total : 23694.00 92.55 0.00 0.00 0.00 0.00 0.00 00:15:05.568 00:15:06.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.506 Nvme0n1 : 7.00 23705.57 92.60 0.00 0.00 0.00 0.00 0.00 00:15:06.506 =================================================================================================================== 00:15:06.506 Total : 23705.57 92.60 0.00 0.00 0.00 0.00 0.00 00:15:06.506 00:15:07.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.884 Nvme0n1 : 8.00 23720.25 92.66 0.00 0.00 0.00 0.00 0.00 00:15:07.884 =================================================================================================================== 00:15:07.884 Total : 23720.25 92.66 0.00 0.00 0.00 0.00 0.00 00:15:07.884 00:15:08.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.821 Nvme0n1 : 9.00 23731.22 92.70 0.00 0.00 0.00 0.00 0.00 00:15:08.821 =================================================================================================================== 00:15:08.821 Total : 23731.22 92.70 0.00 0.00 0.00 0.00 0.00 00:15:08.821 00:15:09.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.765 Nvme0n1 : 10.00 23745.70 92.76 0.00 0.00 0.00 0.00 0.00 00:15:09.765 =================================================================================================================== 00:15:09.765 Total : 23745.70 92.76 0.00 0.00 0.00 0.00 0.00 00:15:09.765 00:15:09.765 00:15:09.765 Latency(us) 00:15:09.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.765 Nvme0n1 : 10.00 23746.85 92.76 0.00 0.00 5387.20 1695.39 10599.74 00:15:09.765 =================================================================================================================== 00:15:09.765 Total : 23746.85 92.76 0.00 0.00 5387.20 1695.39 10599.74 00:15:09.765 0 00:15:09.765 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 555374 00:15:09.765 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 555374 ']' 00:15:09.765 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 555374 00:15:09.765 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:09.765 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:09.765 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 555374 00:15:09.765 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:09.765 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:09.765 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 555374' 00:15:09.765 killing process with pid 555374 00:15:09.765 11:25:53 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 555374 00:15:09.765 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.765 00:15:09.765 Latency(us) 00:15:09.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.765 =================================================================================================================== 00:15:09.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:09.765 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 555374 00:15:09.765 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:10.048 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:10.306 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 00:15:10.306 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:10.306 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:10.306 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:10.306 11:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:10.565 [2024-07-15 11:25:54.037872] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:10.565 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 00:15:10.565 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:10.565 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 00:15:10.565 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:10.565 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.565 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:10.565 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.565 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:10.565 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.565 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:10.565 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:10.565 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 00:15:10.824 request: 00:15:10.824 { 00:15:10.824 "uuid": "e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7", 00:15:10.824 "method": "bdev_lvol_get_lvstores", 00:15:10.824 "req_id": 1 00:15:10.824 } 00:15:10.824 Got JSON-RPC error response 00:15:10.824 response: 00:15:10.824 { 00:15:10.824 "code": -19, 00:15:10.824 "message": "No such device" 00:15:10.824 } 00:15:10.824 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:10.824 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:10.824 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:10.824 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:10.824 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:10.824 aio_bdev 00:15:11.081 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 925e0786-6741-48d3-8e0d-57a1e8dadcf0 00:15:11.081 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=925e0786-6741-48d3-8e0d-57a1e8dadcf0 00:15:11.081 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:11.081 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:11.082 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:11.082 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:11.082 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:11.082 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 925e0786-6741-48d3-8e0d-57a1e8dadcf0 -t 2000 00:15:11.338 [ 00:15:11.338 { 00:15:11.338 "name": "925e0786-6741-48d3-8e0d-57a1e8dadcf0", 00:15:11.338 "aliases": [ 00:15:11.338 "lvs/lvol" 00:15:11.338 ], 00:15:11.338 "product_name": "Logical Volume", 00:15:11.338 "block_size": 4096, 00:15:11.338 "num_blocks": 38912, 00:15:11.338 "uuid": "925e0786-6741-48d3-8e0d-57a1e8dadcf0", 00:15:11.339 "assigned_rate_limits": { 00:15:11.339 "rw_ios_per_sec": 0, 00:15:11.339 "rw_mbytes_per_sec": 0, 00:15:11.339 "r_mbytes_per_sec": 0, 00:15:11.339 "w_mbytes_per_sec": 0 00:15:11.339 }, 00:15:11.339 "claimed": false, 00:15:11.339 "zoned": false, 00:15:11.339 "supported_io_types": { 00:15:11.339 "read": true, 00:15:11.339 "write": true, 00:15:11.339 "unmap": true, 00:15:11.339 "flush": false, 00:15:11.339 "reset": true, 00:15:11.339 "nvme_admin": false, 00:15:11.339 "nvme_io": false, 00:15:11.339 
"nvme_io_md": false, 00:15:11.339 "write_zeroes": true, 00:15:11.339 "zcopy": false, 00:15:11.339 "get_zone_info": false, 00:15:11.339 "zone_management": false, 00:15:11.339 "zone_append": false, 00:15:11.339 "compare": false, 00:15:11.339 "compare_and_write": false, 00:15:11.339 "abort": false, 00:15:11.339 "seek_hole": true, 00:15:11.339 "seek_data": true, 00:15:11.339 "copy": false, 00:15:11.339 "nvme_iov_md": false 00:15:11.339 }, 00:15:11.339 "driver_specific": { 00:15:11.339 "lvol": { 00:15:11.339 "lvol_store_uuid": "e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7", 00:15:11.339 "base_bdev": "aio_bdev", 00:15:11.339 "thin_provision": false, 00:15:11.339 "num_allocated_clusters": 38, 00:15:11.339 "snapshot": false, 00:15:11.339 "clone": false, 00:15:11.339 "esnap_clone": false 00:15:11.339 } 00:15:11.339 } 00:15:11.339 } 00:15:11.339 ] 00:15:11.339 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:11.339 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 00:15:11.339 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:11.339 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:11.339 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 00:15:11.339 11:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:11.596 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:11.596 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 925e0786-6741-48d3-8e0d-57a1e8dadcf0 00:15:11.854 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e0bb7dcd-713c-4681-81b5-0fd4cfdb92d7 00:15:11.854 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.113 00:15:12.113 real 0m15.919s 00:15:12.113 user 0m15.688s 00:15:12.113 sys 0m1.373s 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:12.113 ************************************ 00:15:12.113 END TEST lvs_grow_clean 00:15:12.113 ************************************ 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:12.113 ************************************ 00:15:12.113 START TEST lvs_grow_dirty 00:15:12.113 ************************************ 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:12.113 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.372 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.372 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:12.372 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:12.372 11:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:12.631 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:12.631 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:12.631 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:12.890 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:12.890 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:12.890 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da lvol 150 00:15:12.890 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=03613b43-772a-4b9d-83a8-edb379793725 00:15:12.890 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.890 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:13.149 
[2024-07-15 11:25:56.585955] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:13.149 [2024-07-15 11:25:56.586005] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:13.149 true 00:15:13.149 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:13.149 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:13.408 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:13.408 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:13.408 11:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 03613b43-772a-4b9d-83a8-edb379793725 00:15:13.667 11:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:13.925 [2024-07-15 11:25:57.276022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.925 11:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:13.925 11:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=557997 00:15:13.925 11:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.925 11:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:13.925 11:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 557997 /var/tmp/bdevperf.sock 00:15:13.925 11:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 557997 ']' 00:15:13.925 11:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.925 11:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.925 11:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
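The dirty variant has now built the same topology it will later shut down uncleanly: a 200 MiB AIO file backs an lvstore, a 150 MiB lvol is carved out of it, the backing file is doubled to 400 MiB and rescanned (the lvstore itself is only grown later, while I/O is running), and the lvol is exported over NVMe-oF/TCP so bdevperf can write to it. A condensed sketch of that sequence, assembled from the RPC calls visible in the log (variable names are illustrative):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  aio_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio_file"
  "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096
  lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M "$aio_file"          # double the backing file ...
  "$rpc" bdev_aio_rescan aio_bdev       # ... and let the AIO bdev pick up the new size (51200 -> 102400 blocks)
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that bdev_lvol_grow_lvstore is not issued here; it appears a few lines further down, while bdevperf is already driving random writes at the namespace, which is the point of this test.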
00:15:13.925 11:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.925 11:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 [2024-07-15 11:25:57.507100] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:13.925 [2024-07-15 11:25:57.507146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid557997 ] 00:15:14.183 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.183 [2024-07-15 11:25:57.572486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.183 [2024-07-15 11:25:57.651674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.749 11:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.749 11:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:14.749 11:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:15.314 Nvme0n1 00:15:15.314 11:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:15.572 [ 00:15:15.572 { 00:15:15.572 "name": "Nvme0n1", 00:15:15.572 "aliases": [ 00:15:15.572 "03613b43-772a-4b9d-83a8-edb379793725" 00:15:15.572 ], 00:15:15.572 "product_name": "NVMe disk", 00:15:15.572 "block_size": 4096, 00:15:15.572 "num_blocks": 38912, 00:15:15.572 "uuid": "03613b43-772a-4b9d-83a8-edb379793725", 00:15:15.572 "assigned_rate_limits": { 00:15:15.572 "rw_ios_per_sec": 0, 00:15:15.572 "rw_mbytes_per_sec": 0, 00:15:15.572 "r_mbytes_per_sec": 0, 00:15:15.572 "w_mbytes_per_sec": 0 00:15:15.572 }, 00:15:15.572 "claimed": false, 00:15:15.572 "zoned": false, 00:15:15.572 "supported_io_types": { 00:15:15.572 "read": true, 00:15:15.572 "write": true, 00:15:15.572 "unmap": true, 00:15:15.572 "flush": true, 00:15:15.572 "reset": true, 00:15:15.572 "nvme_admin": true, 00:15:15.572 "nvme_io": true, 00:15:15.572 "nvme_io_md": false, 00:15:15.572 "write_zeroes": true, 00:15:15.572 "zcopy": false, 00:15:15.572 "get_zone_info": false, 00:15:15.572 "zone_management": false, 00:15:15.572 "zone_append": false, 00:15:15.572 "compare": true, 00:15:15.572 "compare_and_write": true, 00:15:15.572 "abort": true, 00:15:15.572 "seek_hole": false, 00:15:15.572 "seek_data": false, 00:15:15.572 "copy": true, 00:15:15.572 "nvme_iov_md": false 00:15:15.572 }, 00:15:15.572 "memory_domains": [ 00:15:15.572 { 00:15:15.572 "dma_device_id": "system", 00:15:15.572 "dma_device_type": 1 00:15:15.572 } 00:15:15.572 ], 00:15:15.572 "driver_specific": { 00:15:15.572 "nvme": [ 00:15:15.572 { 00:15:15.572 "trid": { 00:15:15.572 "trtype": "TCP", 00:15:15.572 "adrfam": "IPv4", 00:15:15.572 "traddr": "10.0.0.2", 00:15:15.572 "trsvcid": "4420", 00:15:15.572 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:15.572 }, 00:15:15.572 "ctrlr_data": { 00:15:15.572 "cntlid": 1, 00:15:15.572 "vendor_id": "0x8086", 00:15:15.572 "model_number": "SPDK bdev Controller", 00:15:15.572 "serial_number": "SPDK0", 
00:15:15.572 "firmware_revision": "24.09", 00:15:15.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:15.572 "oacs": { 00:15:15.572 "security": 0, 00:15:15.572 "format": 0, 00:15:15.572 "firmware": 0, 00:15:15.572 "ns_manage": 0 00:15:15.572 }, 00:15:15.572 "multi_ctrlr": true, 00:15:15.572 "ana_reporting": false 00:15:15.572 }, 00:15:15.572 "vs": { 00:15:15.572 "nvme_version": "1.3" 00:15:15.572 }, 00:15:15.572 "ns_data": { 00:15:15.572 "id": 1, 00:15:15.572 "can_share": true 00:15:15.572 } 00:15:15.572 } 00:15:15.572 ], 00:15:15.573 "mp_policy": "active_passive" 00:15:15.573 } 00:15:15.573 } 00:15:15.573 ] 00:15:15.573 11:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=558296 00:15:15.573 11:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:15.573 11:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:15.573 Running I/O for 10 seconds... 00:15:16.508 Latency(us) 00:15:16.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.508 Nvme0n1 : 1.00 22205.00 86.74 0.00 0.00 0.00 0.00 0.00 00:15:16.508 =================================================================================================================== 00:15:16.508 Total : 22205.00 86.74 0.00 0.00 0.00 0.00 0.00 00:15:16.508 00:15:17.442 11:26:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:17.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.699 Nvme0n1 : 2.00 22070.50 86.21 0.00 0.00 0.00 0.00 0.00 00:15:17.699 =================================================================================================================== 00:15:17.699 Total : 22070.50 86.21 0.00 0.00 0.00 0.00 0.00 00:15:17.699 00:15:17.699 true 00:15:17.699 11:26:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:17.699 11:26:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:17.955 11:26:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:17.955 11:26:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:17.955 11:26:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 558296 00:15:18.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.521 Nvme0n1 : 3.00 22164.33 86.58 0.00 0.00 0.00 0.00 0.00 00:15:18.521 =================================================================================================================== 00:15:18.521 Total : 22164.33 86.58 0.00 0.00 0.00 0.00 0.00 00:15:18.521 00:15:19.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.895 Nvme0n1 : 4.00 22255.25 86.93 0.00 0.00 0.00 0.00 0.00 00:15:19.895 =================================================================================================================== 00:15:19.895 Total : 22255.25 86.93 0.00 0.00 
0.00 0.00 0.00 00:15:19.895 00:15:20.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.830 Nvme0n1 : 5.00 22321.00 87.19 0.00 0.00 0.00 0.00 0.00 00:15:20.830 =================================================================================================================== 00:15:20.830 Total : 22321.00 87.19 0.00 0.00 0.00 0.00 0.00 00:15:20.830 00:15:21.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.767 Nvme0n1 : 6.00 22363.50 87.36 0.00 0.00 0.00 0.00 0.00 00:15:21.767 =================================================================================================================== 00:15:21.767 Total : 22363.50 87.36 0.00 0.00 0.00 0.00 0.00 00:15:21.767 00:15:22.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.744 Nvme0n1 : 7.00 22399.57 87.50 0.00 0.00 0.00 0.00 0.00 00:15:22.744 =================================================================================================================== 00:15:22.744 Total : 22399.57 87.50 0.00 0.00 0.00 0.00 0.00 00:15:22.744 00:15:23.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.681 Nvme0n1 : 8.00 22432.62 87.63 0.00 0.00 0.00 0.00 0.00 00:15:23.681 =================================================================================================================== 00:15:23.681 Total : 22432.62 87.63 0.00 0.00 0.00 0.00 0.00 00:15:23.681 00:15:24.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.616 Nvme0n1 : 9.00 22454.78 87.71 0.00 0.00 0.00 0.00 0.00 00:15:24.616 =================================================================================================================== 00:15:24.616 Total : 22454.78 87.71 0.00 0.00 0.00 0.00 0.00 00:15:24.616 00:15:25.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.581 Nvme0n1 : 10.00 22470.90 87.78 0.00 0.00 0.00 0.00 0.00 00:15:25.581 =================================================================================================================== 00:15:25.581 Total : 22470.90 87.78 0.00 0.00 0.00 0.00 0.00 00:15:25.581 00:15:25.581 00:15:25.581 Latency(us) 00:15:25.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.581 Nvme0n1 : 10.01 22471.22 87.78 0.00 0.00 5692.07 1531.55 8662.15 00:15:25.581 =================================================================================================================== 00:15:25.581 Total : 22471.22 87.78 0.00 0.00 5692.07 1531.55 8662.15 00:15:25.581 0 00:15:25.581 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 557997 00:15:25.581 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 557997 ']' 00:15:25.581 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 557997 00:15:25.581 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:25.581 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.581 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 557997 00:15:25.581 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:25.581 11:26:09 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:25.581 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 557997' 00:15:25.581 killing process with pid 557997 00:15:25.581 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 557997 00:15:25.581 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.581 00:15:25.581 Latency(us) 00:15:25.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.581 =================================================================================================================== 00:15:25.581 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.581 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 557997 00:15:25.840 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:26.099 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:26.099 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:26.099 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 554856 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 554856 00:15:26.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 554856 Killed "${NVMF_APP[@]}" "$@" 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=560054 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 560054 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 560054 ']' 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.358 11:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:26.358 [2024-07-15 11:26:09.941630] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:26.358 [2024-07-15 11:26:09.941676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.617 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.617 [2024-07-15 11:26:10.014698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.617 [2024-07-15 11:26:10.104167] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.617 [2024-07-15 11:26:10.104202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.617 [2024-07-15 11:26:10.104208] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.617 [2024-07-15 11:26:10.104214] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.617 [2024-07-15 11:26:10.104220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
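What makes this variant "dirty" is visible just above: the running nvmf_tgt (pid 554856) is killed with kill -9 while the grown lvstore is still loaded, so its metadata is never flushed cleanly, and a fresh target (pid 560054) is started in its place. The lines that follow re-create the AIO bdev on the same backing file, which forces the blobstore to run recovery before the lvstore and its lvol become visible again. A minimal illustration of that sequence, assuming the same paths and helper variables used earlier:

  kill -9 "$nvmfpid"                                             # unclean shutdown: lvstore metadata left dirty
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0x1 &    # start a fresh target
  "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096               # re-attach the file; the blobstore detects the
                                                                 # dirty state and performs recovery (NOTICE below)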
00:15:26.617 [2024-07-15 11:26:10.104241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.183 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.183 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:27.183 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:27.183 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.183 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:27.441 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.441 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:27.441 [2024-07-15 11:26:10.942299] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:27.441 [2024-07-15 11:26:10.942378] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:27.441 [2024-07-15 11:26:10.942402] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:27.441 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:27.441 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 03613b43-772a-4b9d-83a8-edb379793725 00:15:27.441 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=03613b43-772a-4b9d-83a8-edb379793725 00:15:27.441 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:27.441 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:27.441 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:27.441 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:27.441 11:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:27.699 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 03613b43-772a-4b9d-83a8-edb379793725 -t 2000 00:15:27.958 [ 00:15:27.958 { 00:15:27.958 "name": "03613b43-772a-4b9d-83a8-edb379793725", 00:15:27.958 "aliases": [ 00:15:27.958 "lvs/lvol" 00:15:27.958 ], 00:15:27.958 "product_name": "Logical Volume", 00:15:27.958 "block_size": 4096, 00:15:27.958 "num_blocks": 38912, 00:15:27.958 "uuid": "03613b43-772a-4b9d-83a8-edb379793725", 00:15:27.958 "assigned_rate_limits": { 00:15:27.958 "rw_ios_per_sec": 0, 00:15:27.958 "rw_mbytes_per_sec": 0, 00:15:27.958 "r_mbytes_per_sec": 0, 00:15:27.958 "w_mbytes_per_sec": 0 00:15:27.958 }, 00:15:27.958 "claimed": false, 00:15:27.958 "zoned": false, 00:15:27.958 "supported_io_types": { 00:15:27.958 "read": true, 00:15:27.958 "write": true, 00:15:27.958 "unmap": true, 00:15:27.958 "flush": false, 00:15:27.958 "reset": true, 00:15:27.958 "nvme_admin": false, 00:15:27.958 "nvme_io": false, 00:15:27.958 "nvme_io_md": 
false, 00:15:27.958 "write_zeroes": true, 00:15:27.958 "zcopy": false, 00:15:27.958 "get_zone_info": false, 00:15:27.958 "zone_management": false, 00:15:27.958 "zone_append": false, 00:15:27.958 "compare": false, 00:15:27.958 "compare_and_write": false, 00:15:27.958 "abort": false, 00:15:27.958 "seek_hole": true, 00:15:27.958 "seek_data": true, 00:15:27.958 "copy": false, 00:15:27.958 "nvme_iov_md": false 00:15:27.958 }, 00:15:27.958 "driver_specific": { 00:15:27.958 "lvol": { 00:15:27.958 "lvol_store_uuid": "3e2ecf50-d805-490f-a6a5-c2e9fd3b93da", 00:15:27.958 "base_bdev": "aio_bdev", 00:15:27.958 "thin_provision": false, 00:15:27.958 "num_allocated_clusters": 38, 00:15:27.958 "snapshot": false, 00:15:27.958 "clone": false, 00:15:27.958 "esnap_clone": false 00:15:27.958 } 00:15:27.958 } 00:15:27.958 } 00:15:27.958 ] 00:15:27.958 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:27.958 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:27.958 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:27.958 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:27.958 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:27.958 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:28.216 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:28.216 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:28.474 [2024-07-15 11:26:11.819047] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:28.474 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:28.474 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:28.474 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:28.475 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.475 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.475 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.475 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.475 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
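The next few lines deliberately provoke a failure: aio_bdev, the base bdev of the lvstore, has just been deleted (see the hotremove NOTICE above), so bdev_lvol_get_lvstores for that UUID must now fail. The harness wraps the RPC in its NOT helper so the step only passes when the call returns an error; stripped of the xtrace noise, the check amounts to something like this (illustrative only):

  # expected-failure check: the lvstore must be gone once its base bdev is removed
  if "$rpc" bdev_lvol_get_lvstores -u "$lvs"; then
      echo "lvstore still reported after aio_bdev was deleted" >&2
      exit 1
  fi
  # the call is expected to fail with JSON-RPC error -19 ("No such device"),
  # which is exactly the response printed below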
00:15:28.475 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.475 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.475 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:28.475 11:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:28.475 request: 00:15:28.475 { 00:15:28.475 "uuid": "3e2ecf50-d805-490f-a6a5-c2e9fd3b93da", 00:15:28.475 "method": "bdev_lvol_get_lvstores", 00:15:28.475 "req_id": 1 00:15:28.475 } 00:15:28.475 Got JSON-RPC error response 00:15:28.475 response: 00:15:28.475 { 00:15:28.475 "code": -19, 00:15:28.475 "message": "No such device" 00:15:28.475 } 00:15:28.475 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:28.475 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:28.475 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:28.475 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:28.475 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:28.732 aio_bdev 00:15:28.732 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 03613b43-772a-4b9d-83a8-edb379793725 00:15:28.732 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=03613b43-772a-4b9d-83a8-edb379793725 00:15:28.732 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:28.732 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:28.732 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:28.732 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:28.733 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:28.991 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 03613b43-772a-4b9d-83a8-edb379793725 -t 2000 00:15:28.991 [ 00:15:28.991 { 00:15:28.991 "name": "03613b43-772a-4b9d-83a8-edb379793725", 00:15:28.991 "aliases": [ 00:15:28.991 "lvs/lvol" 00:15:28.991 ], 00:15:28.991 "product_name": "Logical Volume", 00:15:28.991 "block_size": 4096, 00:15:28.991 "num_blocks": 38912, 00:15:28.991 "uuid": "03613b43-772a-4b9d-83a8-edb379793725", 00:15:28.991 "assigned_rate_limits": { 00:15:28.991 "rw_ios_per_sec": 0, 00:15:28.991 "rw_mbytes_per_sec": 0, 00:15:28.991 "r_mbytes_per_sec": 0, 00:15:28.991 "w_mbytes_per_sec": 0 00:15:28.991 }, 00:15:28.991 "claimed": false, 00:15:28.991 "zoned": false, 00:15:28.991 "supported_io_types": { 
00:15:28.992 "read": true, 00:15:28.992 "write": true, 00:15:28.992 "unmap": true, 00:15:28.992 "flush": false, 00:15:28.992 "reset": true, 00:15:28.992 "nvme_admin": false, 00:15:28.992 "nvme_io": false, 00:15:28.992 "nvme_io_md": false, 00:15:28.992 "write_zeroes": true, 00:15:28.992 "zcopy": false, 00:15:28.992 "get_zone_info": false, 00:15:28.992 "zone_management": false, 00:15:28.992 "zone_append": false, 00:15:28.992 "compare": false, 00:15:28.992 "compare_and_write": false, 00:15:28.992 "abort": false, 00:15:28.992 "seek_hole": true, 00:15:28.992 "seek_data": true, 00:15:28.992 "copy": false, 00:15:28.992 "nvme_iov_md": false 00:15:28.992 }, 00:15:28.992 "driver_specific": { 00:15:28.992 "lvol": { 00:15:28.992 "lvol_store_uuid": "3e2ecf50-d805-490f-a6a5-c2e9fd3b93da", 00:15:28.992 "base_bdev": "aio_bdev", 00:15:28.992 "thin_provision": false, 00:15:28.992 "num_allocated_clusters": 38, 00:15:28.992 "snapshot": false, 00:15:28.992 "clone": false, 00:15:28.992 "esnap_clone": false 00:15:28.992 } 00:15:28.992 } 00:15:28.992 } 00:15:28.992 ] 00:15:28.992 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:28.992 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:28.992 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:29.250 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:29.250 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:29.250 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:29.508 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:29.508 11:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 03613b43-772a-4b9d-83a8-edb379793725 00:15:29.508 11:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e2ecf50-d805-490f-a6a5-c2e9fd3b93da 00:15:29.774 11:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:30.036 00:15:30.036 real 0m17.780s 00:15:30.036 user 0m45.378s 00:15:30.036 sys 0m4.056s 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:30.036 ************************************ 00:15:30.036 END TEST lvs_grow_dirty 00:15:30.036 ************************************ 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:30.036 nvmf_trace.0 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:30.036 rmmod nvme_tcp 00:15:30.036 rmmod nvme_fabrics 00:15:30.036 rmmod nvme_keyring 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 560054 ']' 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 560054 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 560054 ']' 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 560054 00:15:30.036 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 560054 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 560054' 00:15:30.296 killing process with pid 560054 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 560054 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 560054 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:30.296 11:26:13 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.296 11:26:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.828 11:26:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:32.828 00:15:32.828 real 0m43.212s 00:15:32.828 user 1m6.966s 00:15:32.828 sys 0m10.234s 00:15:32.828 11:26:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.828 11:26:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:32.828 ************************************ 00:15:32.828 END TEST nvmf_lvs_grow 00:15:32.828 ************************************ 00:15:32.828 11:26:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:32.828 11:26:15 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:32.828 11:26:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:32.828 11:26:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.828 11:26:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:32.828 ************************************ 00:15:32.828 START TEST nvmf_bdev_io_wait 00:15:32.828 ************************************ 00:15:32.828 11:26:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:32.828 * Looking for test storage... 
00:15:32.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.828 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:32.829 11:26:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:38.101 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:38.101 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.101 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:38.102 Found net devices under 0000:86:00.0: cvl_0_0 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:38.102 Found net devices under 0000:86:00.1: cvl_0_1 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.102 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:38.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:15:38.361 00:15:38.361 --- 10.0.0.2 ping statistics --- 00:15:38.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.361 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:38.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:15:38.361 00:15:38.361 --- 10.0.0.1 ping statistics --- 00:15:38.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.361 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=564318 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 564318 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 564318 ']' 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.361 11:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.361 [2024-07-15 11:26:21.936791] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
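The nvmf_tcp_init block above builds the whole test bed out of the two E810 ports: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and carries the target address 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens TCP port 4420 for NVMe/TCP, and the two pings confirm reachability in both directions. A minimal sketch of the equivalent sequence, with the interface, namespace, and address values taken from the trace:

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace

From this point on, everything target-side runs behind the "ip netns exec cvl_0_0_ns_spdk" prefix (the NVMF_TARGET_NS_CMD seen in the trace), while initiator-side tools run unprefixed in the root namespace.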
00:15:38.361 [2024-07-15 11:26:21.936837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.620 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.620 [2024-07-15 11:26:22.007568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.620 [2024-07-15 11:26:22.087909] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.620 [2024-07-15 11:26:22.087947] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.620 [2024-07-15 11:26:22.087955] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.620 [2024-07-15 11:26:22.087961] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.620 [2024-07-15 11:26:22.087966] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.620 [2024-07-15 11:26:22.088027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.620 [2024-07-15 11:26:22.088220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.620 [2024-07-15 11:26:22.088136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.620 [2024-07-15 11:26:22.088221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.187 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.187 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:39.187 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:39.187 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:39.187 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:39.187 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.187 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:39.187 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.187 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:39.445 [2024-07-15 11:26:22.846214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
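The target itself is then launched inside that namespace with --wait-for-rpc, which defers subsystem initialization so that bdev_set_options -p 5 -c 1 can shrink the bdev I/O pool and cache before anything allocates from them; a deliberately tiny pool is what lets the run exercise the bdev I/O wait path this test is named after. Only afterwards is the framework started and the TCP transport created. A sketch of the same steps, assuming rpc.py is SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock (the trace goes through its rpc_cmd wrapper instead) and paths relative to an SPDK checkout:

# start the target inside the target namespace, deferring subsystem init (-m 0xF = 4 cores)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# (the harness waits for /var/tmp/spdk.sock via waitforlisten before issuing RPCs)

./scripts/rpc.py bdev_set_options -p 5 -c 1                 # tiny bdev_io pool/cache, per the trace
./scripts/rpc.py framework_start_init                       # finish the deferred initialization
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # flags exactly as used in the trace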
00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:39.445 Malloc0 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:39.445 [2024-07-15 11:26:22.908083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=564424 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=564427 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:39.445 { 00:15:39.445 "params": { 00:15:39.445 "name": "Nvme$subsystem", 00:15:39.445 "trtype": "$TEST_TRANSPORT", 00:15:39.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.445 "adrfam": "ipv4", 00:15:39.445 "trsvcid": "$NVMF_PORT", 00:15:39.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.445 "hdgst": ${hdgst:-false}, 00:15:39.445 "ddgst": ${ddgst:-false} 00:15:39.445 }, 00:15:39.445 "method": "bdev_nvme_attach_controller" 00:15:39.445 } 00:15:39.445 EOF 00:15:39.445 )") 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=564430 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:39.445 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:39.446 { 00:15:39.446 "params": { 00:15:39.446 "name": "Nvme$subsystem", 00:15:39.446 "trtype": "$TEST_TRANSPORT", 00:15:39.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.446 "adrfam": "ipv4", 00:15:39.446 "trsvcid": "$NVMF_PORT", 00:15:39.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.446 "hdgst": ${hdgst:-false}, 00:15:39.446 "ddgst": ${ddgst:-false} 00:15:39.446 }, 00:15:39.446 "method": "bdev_nvme_attach_controller" 00:15:39.446 } 00:15:39.446 EOF 00:15:39.446 )") 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=564434 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:39.446 { 00:15:39.446 "params": { 00:15:39.446 "name": "Nvme$subsystem", 00:15:39.446 "trtype": "$TEST_TRANSPORT", 00:15:39.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.446 "adrfam": "ipv4", 00:15:39.446 "trsvcid": "$NVMF_PORT", 00:15:39.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.446 "hdgst": ${hdgst:-false}, 00:15:39.446 "ddgst": ${ddgst:-false} 00:15:39.446 }, 00:15:39.446 "method": "bdev_nvme_attach_controller" 00:15:39.446 } 00:15:39.446 EOF 00:15:39.446 )") 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:39.446 11:26:22 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:39.446 { 00:15:39.446 "params": { 00:15:39.446 "name": "Nvme$subsystem", 00:15:39.446 "trtype": "$TEST_TRANSPORT", 00:15:39.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.446 "adrfam": "ipv4", 00:15:39.446 "trsvcid": "$NVMF_PORT", 00:15:39.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.446 "hdgst": ${hdgst:-false}, 00:15:39.446 "ddgst": ${ddgst:-false} 00:15:39.446 }, 00:15:39.446 "method": "bdev_nvme_attach_controller" 00:15:39.446 } 00:15:39.446 EOF 00:15:39.446 )") 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 564424 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:39.446 "params": { 00:15:39.446 "name": "Nvme1", 00:15:39.446 "trtype": "tcp", 00:15:39.446 "traddr": "10.0.0.2", 00:15:39.446 "adrfam": "ipv4", 00:15:39.446 "trsvcid": "4420", 00:15:39.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.446 "hdgst": false, 00:15:39.446 "ddgst": false 00:15:39.446 }, 00:15:39.446 "method": "bdev_nvme_attach_controller" 00:15:39.446 }' 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
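With the transport up, the next rpc_cmd calls export a 64 MiB, 512-byte-block malloc bdev as namespace 1 of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and four bdevperf instances (write, read, flush, unmap) are each handed a generated JSON config on --json /dev/fd/63 whose payload is the bdev_nvme_attach_controller entry printed above. A sketch of the equivalent setup; scripts/rpc.py again stands in for the rpc_cmd wrapper, and the nvme.json filename plus the outer "subsystems"/"config" wrapper are assumptions about what gen_nvmf_target_json emits (only the inner entry appears verbatim in the trace):

# export a 64 MiB / 512 B malloc bdev over NVMe/TCP
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdev config handed to each bdevperf instance (inner entry copied from the trace)
cat > nvme.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false } } ] } ]
}
EOF

# one instance per workload, each pinned to its own core (-m) with its own instance id (-i)
./build/examples/bdevperf -m 0x10 -i 1 --json nvme.json -q 128 -o 4096 -w write -t 1 -s 256 &
./build/examples/bdevperf -m 0x20 -i 2 --json nvme.json -q 128 -o 4096 -w read  -t 1 -s 256 &
./build/examples/bdevperf -m 0x40 -i 3 --json nvme.json -q 128 -o 4096 -w flush -t 1 -s 256 &
./build/examples/bdevperf -m 0x80 -i 4 --json nvme.json -q 128 -o 4096 -w unmap -t 1 -s 256 &
wait

In the result tables that follow, the MiB/s column is simply IOPS x 4096 / 2^20 (for example 15547.59 IOPS of 4 KiB I/O is 60.73 MiB/s), which is how the figures in the per-workload tables line up.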
00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:39.446 "params": { 00:15:39.446 "name": "Nvme1", 00:15:39.446 "trtype": "tcp", 00:15:39.446 "traddr": "10.0.0.2", 00:15:39.446 "adrfam": "ipv4", 00:15:39.446 "trsvcid": "4420", 00:15:39.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.446 "hdgst": false, 00:15:39.446 "ddgst": false 00:15:39.446 }, 00:15:39.446 "method": "bdev_nvme_attach_controller" 00:15:39.446 }' 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:39.446 "params": { 00:15:39.446 "name": "Nvme1", 00:15:39.446 "trtype": "tcp", 00:15:39.446 "traddr": "10.0.0.2", 00:15:39.446 "adrfam": "ipv4", 00:15:39.446 "trsvcid": "4420", 00:15:39.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.446 "hdgst": false, 00:15:39.446 "ddgst": false 00:15:39.446 }, 00:15:39.446 "method": "bdev_nvme_attach_controller" 00:15:39.446 }' 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:39.446 11:26:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:39.446 "params": { 00:15:39.446 "name": "Nvme1", 00:15:39.446 "trtype": "tcp", 00:15:39.446 "traddr": "10.0.0.2", 00:15:39.446 "adrfam": "ipv4", 00:15:39.446 "trsvcid": "4420", 00:15:39.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.446 "hdgst": false, 00:15:39.446 "ddgst": false 00:15:39.446 }, 00:15:39.446 "method": "bdev_nvme_attach_controller" 00:15:39.446 }' 00:15:39.446 [2024-07-15 11:26:22.955622] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:39.446 [2024-07-15 11:26:22.955671] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:39.446 [2024-07-15 11:26:22.959696] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:39.446 [2024-07-15 11:26:22.959697] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:39.446 [2024-07-15 11:26:22.959743] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 [2024-07-15 11:26:22.959744] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib--proc-type=auto ] 00:15:39.446 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:39.446 [2024-07-15 11:26:22.961844] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:15:39.446 [2024-07-15 11:26:22.961883] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:39.446 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.715 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.715 [2024-07-15 11:26:23.132001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.715 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.715 [2024-07-15 11:26:23.172843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.715 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.715 [2024-07-15 11:26:23.220278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:39.715 [2024-07-15 11:26:23.249931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.715 [2024-07-15 11:26:23.250738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:39.977 [2024-07-15 11:26:23.321012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:39.977 [2024-07-15 11:26:23.349786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.977 [2024-07-15 11:26:23.440556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:39.977 Running I/O for 1 seconds... 00:15:39.977 Running I/O for 1 seconds... 00:15:40.235 Running I/O for 1 seconds... 00:15:40.235 Running I/O for 1 seconds... 00:15:41.169 00:15:41.169 Latency(us) 00:15:41.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.170 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:41.170 Nvme1n1 : 1.00 15547.59 60.73 0.00 0.00 8211.96 4302.58 16640.45 00:15:41.170 =================================================================================================================== 00:15:41.170 Total : 15547.59 60.73 0.00 0.00 8211.96 4302.58 16640.45 00:15:41.170 00:15:41.170 Latency(us) 00:15:41.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.170 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:41.170 Nvme1n1 : 1.01 6281.43 24.54 0.00 0.00 20254.45 10428.77 33736.79 00:15:41.170 =================================================================================================================== 00:15:41.170 Total : 6281.43 24.54 0.00 0.00 20254.45 10428.77 33736.79 00:15:41.170 00:15:41.170 Latency(us) 00:15:41.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.170 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:41.170 Nvme1n1 : 1.00 244810.06 956.29 0.00 0.00 520.88 215.49 666.05 00:15:41.170 =================================================================================================================== 00:15:41.170 Total : 244810.06 956.29 0.00 0.00 520.88 215.49 666.05 00:15:41.170 00:15:41.170 Latency(us) 00:15:41.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.170 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:41.170 Nvme1n1 : 1.01 6695.05 26.15 0.00 0.00 19064.50 5043.42 45134.36 00:15:41.170 =================================================================================================================== 00:15:41.170 Total : 6695.05 26.15 0.00 0.00 19064.50 5043.42 45134.36 00:15:41.428 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 564427 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 564430 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 564434 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.429 rmmod nvme_tcp 00:15:41.429 rmmod nvme_fabrics 00:15:41.429 rmmod nvme_keyring 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 564318 ']' 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 564318 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 564318 ']' 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 564318 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:41.429 11:26:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 564318 00:15:41.429 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:41.429 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:41.429 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 564318' 00:15:41.429 killing process with pid 564318 00:15:41.429 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 564318 00:15:41.429 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 564318 00:15:41.688 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.688 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.688 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.688 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.688 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.688 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.688 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.688 11:26:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.254 11:26:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:44.254 00:15:44.254 real 0m11.278s 00:15:44.254 user 0m19.430s 00:15:44.254 sys 0m6.106s 00:15:44.254 11:26:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:44.254 11:26:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.254 ************************************ 00:15:44.254 END TEST nvmf_bdev_io_wait 00:15:44.254 ************************************ 00:15:44.254 11:26:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:44.254 11:26:27 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:44.254 11:26:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:44.254 11:26:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.254 11:26:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:44.254 ************************************ 00:15:44.254 START TEST nvmf_queue_depth 00:15:44.254 ************************************ 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:44.254 * Looking for test storage... 
00:15:44.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:44.254 11:26:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.528 
11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:49.528 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.528 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:49.529 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:49.529 Found net devices under 0000:86:00.0: cvl_0_0 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:49.529 Found net devices under 0000:86:00.1: cvl_0_1 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.529 11:26:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.529 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.529 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.529 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.529 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:49.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:15:49.801 00:15:49.801 --- 10.0.0.2 ping statistics --- 00:15:49.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.801 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:15:49.801 00:15:49.801 --- 10.0.0.1 ping statistics --- 00:15:49.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.801 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.801 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=568350 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 568350 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 568350 ']' 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.802 11:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.802 [2024-07-15 11:26:33.277405] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:15:49.802 [2024-07-15 11:26:33.277453] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.802 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.802 [2024-07-15 11:26:33.348998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.070 [2024-07-15 11:26:33.429496] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.070 [2024-07-15 11:26:33.429527] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.070 [2024-07-15 11:26:33.429534] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.070 [2024-07-15 11:26:33.429540] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.070 [2024-07-15 11:26:33.429545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.070 [2024-07-15 11:26:33.429562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.638 [2024-07-15 11:26:34.120566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.638 Malloc0 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.638 
11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.638 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.639 [2024-07-15 11:26:34.179638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.639 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.639 11:26:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=568391 00:15:50.639 11:26:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:50.639 11:26:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:50.639 11:26:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 568391 /var/tmp/bdevperf.sock 00:15:50.639 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 568391 ']' 00:15:50.639 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.639 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.639 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:50.639 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.639 11:26:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.898 [2024-07-15 11:26:34.230506] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
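For the queue-depth run the roles are split differently: bdevperf is started idle with -z on its own RPC socket (/var/tmp/bdevperf.sock), the NVMe-oF controller is attached to it over that socket, and bdevperf.py perform_tests then triggers the 10-second verify workload at a queue depth of 1024. A sketch of those three steps, again assuming scripts/rpc.py for the rpc_cmd wrapper used in the trace and paths relative to an SPDK checkout:

# start bdevperf idle (-z) so bdevs can be added over RPC before any I/O runs
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# (the harness waits for /var/tmp/bdevperf.sock via waitforlisten before issuing RPCs)

# attach the target's namespace, which shows up as bdev NVMe0n1 inside bdevperf
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# kick off the configured workload and wait for the results
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -z mode keeps bdevperf from starting I/O on its own, which is what allows the controller to be attached out of band first and the run to be started explicitly via perform_tests.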
00:15:50.898 [2024-07-15 11:26:34.230552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568391 ] 00:15:50.898 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.898 [2024-07-15 11:26:34.298587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.898 [2024-07-15 11:26:34.378435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.465 11:26:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.465 11:26:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:51.465 11:26:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:51.466 11:26:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.466 11:26:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:51.724 NVMe0n1 00:15:51.724 11:26:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.724 11:26:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.724 Running I/O for 10 seconds... 00:16:01.709 00:16:01.709 Latency(us) 00:16:01.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.709 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:01.709 Verification LBA range: start 0x0 length 0x4000 00:16:01.709 NVMe0n1 : 10.05 12139.36 47.42 0.00 0.00 84052.44 11511.54 57215.78 00:16:01.709 =================================================================================================================== 00:16:01.709 Total : 12139.36 47.42 0.00 0.00 84052.44 11511.54 57215.78 00:16:01.709 0 00:16:01.968 11:26:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 568391 00:16:01.968 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 568391 ']' 00:16:01.968 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 568391 00:16:01.968 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:01.968 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.968 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 568391 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 568391' 00:16:01.969 killing process with pid 568391 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 568391 00:16:01.969 Received shutdown signal, test time was about 10.000000 seconds 00:16:01.969 00:16:01.969 Latency(us) 00:16:01.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.969 =================================================================================================================== 
00:16:01.969 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 568391 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.969 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.969 rmmod nvme_tcp 00:16:02.228 rmmod nvme_fabrics 00:16:02.228 rmmod nvme_keyring 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 568350 ']' 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 568350 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 568350 ']' 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 568350 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 568350 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 568350' 00:16:02.228 killing process with pid 568350 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 568350 00:16:02.228 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 568350 00:16:02.488 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.488 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.488 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:02.488 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.488 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.488 11:26:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.488 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.488 11:26:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.395 11:26:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:04.395 00:16:04.395 real 0m20.577s 00:16:04.395 user 0m24.777s 
00:16:04.395 sys 0m6.013s 00:16:04.395 11:26:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.395 11:26:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:04.395 ************************************ 00:16:04.395 END TEST nvmf_queue_depth 00:16:04.395 ************************************ 00:16:04.395 11:26:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:04.395 11:26:47 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:04.395 11:26:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.395 11:26:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.395 11:26:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:04.395 ************************************ 00:16:04.395 START TEST nvmf_target_multipath 00:16:04.395 ************************************ 00:16:04.395 11:26:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:04.654 * Looking for test storage... 00:16:04.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.654 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.655 11:26:48 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.655 11:26:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:11.226 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:11.227 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:11.227 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:11.227 Found net devices under 0000:86:00.0: cvl_0_0 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:11.227 Found net devices under 0000:86:00.1: cvl_0_1 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:11.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:16:11.227 00:16:11.227 --- 10.0.0.2 ping statistics --- 00:16:11.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.227 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:11.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:11.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:16:11.227 00:16:11.227 --- 10.0.0.1 ping statistics --- 00:16:11.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.227 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:11.227 only one NIC for nvmf test 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:11.227 rmmod nvme_tcp 00:16:11.227 rmmod nvme_fabrics 00:16:11.227 rmmod nvme_keyring 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.227 11:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.660 11:26:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:12.660 11:26:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:12.660 11:26:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:12.660 11:26:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:12.660 11:26:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:12.660 00:16:12.660 real 0m8.047s 00:16:12.660 user 0m1.659s 00:16:12.660 sys 0m4.378s 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:12.660 11:26:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:12.660 ************************************ 00:16:12.660 END TEST nvmf_target_multipath 00:16:12.660 ************************************ 00:16:12.660 11:26:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:12.660 11:26:56 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:12.660 11:26:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:12.660 11:26:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.660 11:26:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.660 ************************************ 00:16:12.660 START TEST nvmf_zcopy 00:16:12.660 ************************************ 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:12.660 * Looking for test storage... 
00:16:12.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:12.660 11:26:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:19.227 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:19.228 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.228 
11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:19.228 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:19.228 Found net devices under 0000:86:00.0: cvl_0_0 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:19.228 Found net devices under 0000:86:00.1: cvl_0_1 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:19.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:16:19.228 00:16:19.228 --- 10.0.0.2 ping statistics --- 00:16:19.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.228 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:19.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:16:19.228 00:16:19.228 --- 10.0.0.1 ping statistics --- 00:16:19.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.228 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=577367 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 577367 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 577367 ']' 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.228 11:27:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.228 [2024-07-15 11:27:02.044053] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:19.228 [2024-07-15 11:27:02.044098] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.228 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.228 [2024-07-15 11:27:02.113318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.228 [2024-07-15 11:27:02.191469] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.228 [2024-07-15 11:27:02.191504] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:19.228 [2024-07-15 11:27:02.191511] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.228 [2024-07-15 11:27:02.191517] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.228 [2024-07-15 11:27:02.191522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.228 [2024-07-15 11:27:02.191545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.487 [2024-07-15 11:27:02.895171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.487 11:27:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.488 [2024-07-15 11:27:02.915307] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.488 malloc0 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.488 
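[Annotation, not part of the captured log] At this point zcopy.sh has finished the target-side setup: a TCP transport created with zero-copy enabled (--zcopy) and in-capsule data size 0 (-c 0), subsystem nqn.2016-06.io.spdk:cnode1 created with allow-any-host and a 10-namespace limit, data and discovery listeners on 10.0.0.2:4420, and a 32 MB malloc bdev with 4096-byte blocks that is attached as namespace 1 in the very next step. The same sequence as plain rpc.py calls, reusing only commands that appear in this log (the rpc.py path is this job's workspace; the running nvmf_tgt inside cvl_0_0_ns_spdk is assumed from earlier in the run):

# ---- sketch of the target-side RPC sequence ----
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                      # TCP transport, zero-copy on, no in-capsule data
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                             # 32 MB bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # namespace used by bdevperf below
# ---- end sketch ----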
11:27:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:19.488 { 00:16:19.488 "params": { 00:16:19.488 "name": "Nvme$subsystem", 00:16:19.488 "trtype": "$TEST_TRANSPORT", 00:16:19.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:19.488 "adrfam": "ipv4", 00:16:19.488 "trsvcid": "$NVMF_PORT", 00:16:19.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:19.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:19.488 "hdgst": ${hdgst:-false}, 00:16:19.488 "ddgst": ${ddgst:-false} 00:16:19.488 }, 00:16:19.488 "method": "bdev_nvme_attach_controller" 00:16:19.488 } 00:16:19.488 EOF 00:16:19.488 )") 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:19.488 11:27:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:19.488 "params": { 00:16:19.488 "name": "Nvme1", 00:16:19.488 "trtype": "tcp", 00:16:19.488 "traddr": "10.0.0.2", 00:16:19.488 "adrfam": "ipv4", 00:16:19.488 "trsvcid": "4420", 00:16:19.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.488 "hdgst": false, 00:16:19.488 "ddgst": false 00:16:19.488 }, 00:16:19.488 "method": "bdev_nvme_attach_controller" 00:16:19.488 }' 00:16:19.488 [2024-07-15 11:27:02.993957] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:19.488 [2024-07-15 11:27:02.994000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid577621 ] 00:16:19.488 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.488 [2024-07-15 11:27:03.062762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.758 [2024-07-15 11:27:03.142501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.016 Running I/O for 10 seconds... 
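[Annotation, not part of the captured log] The JSON fragment printed above is what gen_nvmf_target_json feeds to bdevperf through --json /dev/fd/62: a bdev-subsystem config that attaches Nvme1 to the target before the 10-second, 128-deep, 8 KiB verify pass starts. A standalone sketch follows; the "params"/"method" block is copied from the log, while the surrounding "subsystems"/"bdev" wrapper is an assumption about the shape gen_nvmf_target_json produces rather than verbatim output.

# ---- sketch of the initiator-side run ----
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# Same workload as target/zcopy.sh@33: 128 outstanding 8 KiB verify I/Os for 10 seconds
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192
# ---- end sketch ----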
00:16:29.998
00:16:29.998 Latency(us)
00:16:29.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:29.998 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:16:29.998 Verification LBA range: start 0x0 length 0x1000
00:16:29.998 Nvme1n1 : 10.01 8641.21 67.51 0.00 0.00 14770.06 363.30 26442.35
00:16:29.998 ===================================================================================================================
00:16:29.998 Total : 8641.21 67.51 0.00 0.00 14770.06 363.30 26442.35
00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=579625 00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:30.257 { 00:16:30.257 "params": { 00:16:30.257 "name": "Nvme$subsystem", 00:16:30.257 "trtype": "$TEST_TRANSPORT", 00:16:30.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.257 "adrfam": "ipv4", 00:16:30.257 "trsvcid": "$NVMF_PORT", 00:16:30.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.257 "hdgst": ${hdgst:-false}, 00:16:30.257 "ddgst": ${ddgst:-false} 00:16:30.257 }, 00:16:30.257 "method": "bdev_nvme_attach_controller" 00:16:30.257 } 00:16:30.257 EOF 00:16:30.257 )") 00:16:30.257 [2024-07-15 11:27:13.683576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.257 [2024-07-15 11:27:13.683615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
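Two quick observations before the trace continues. First, the verify results above are internally consistent: 8641.21 IOPS at an 8192-byte I/O size works out to 8641.21 * 8192 / 1048576, which is roughly 67.51 MiB/s and matches the MiB/s column. Second, the long run of paired errors that follows ("Requested NSID 1 already in use" from subsystem.c and "Unable to add namespace" from nvmf_rpc.c) is generated while the second bdevperf instance (5 s of 8 KiB random read/write at a 50% mix) is running: the target keeps receiving nvmf_subsystem_add_ns calls for NSID 1 while that namespace is still attached, so each attempt is rejected. The loop below is only an illustration of how to provoke the same pattern by hand; the harness's real control flow lives in target/zcopy.sh and may add and remove the namespace differently.

  # Illustrative only: repeatedly try to re-add NSID 1 while bdevperf ($perfpid) is still alive.
  # Every attempt fails with "Requested NSID 1 already in use", as in the trace above.
  while kill -0 "$perfpid" 2>/dev/null; do
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done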
00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:30.257 11:27:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:30.257 "params": { 00:16:30.257 "name": "Nvme1", 00:16:30.257 "trtype": "tcp", 00:16:30.257 "traddr": "10.0.0.2", 00:16:30.257 "adrfam": "ipv4", 00:16:30.257 "trsvcid": "4420", 00:16:30.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:30.257 "hdgst": false, 00:16:30.257 "ddgst": false 00:16:30.257 }, 00:16:30.257 "method": "bdev_nvme_attach_controller" 00:16:30.257 }' 00:16:30.257 [2024-07-15 11:27:13.695568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.257 [2024-07-15 11:27:13.695583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.257 [2024-07-15 11:27:13.707598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.257 [2024-07-15 11:27:13.707608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.257 [2024-07-15 11:27:13.719632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.257 [2024-07-15 11:27:13.719643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.257 [2024-07-15 11:27:13.723236] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:30.257 [2024-07-15 11:27:13.723295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid579625 ] 00:16:30.257 [2024-07-15 11:27:13.731662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.257 [2024-07-15 11:27:13.731673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.257 [2024-07-15 11:27:13.743694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.257 [2024-07-15 11:27:13.743704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.257 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.257 [2024-07-15 11:27:13.755727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.257 [2024-07-15 11:27:13.755737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.257 [2024-07-15 11:27:13.767762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.257 [2024-07-15 11:27:13.767773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.257 [2024-07-15 11:27:13.779793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.257 [2024-07-15 11:27:13.779806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.257 [2024-07-15 11:27:13.791825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.257 [2024-07-15 11:27:13.791839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.257 [2024-07-15 11:27:13.791914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.258 [2024-07-15 11:27:13.803860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.258 [2024-07-15 11:27:13.803873] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.258 [2024-07-15 11:27:13.815889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.258 [2024-07-15 11:27:13.815907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.258 [2024-07-15 11:27:13.827924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.258 [2024-07-15 11:27:13.827937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.258 [2024-07-15 11:27:13.839967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.258 [2024-07-15 11:27:13.839990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.851991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.852003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.864024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.864035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.868136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.517 [2024-07-15 11:27:13.876057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.876070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.888099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.888119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.900123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.900137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.912155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.912166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.924188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.924200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.936214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.936231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.948251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.948262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.960320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.960342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.972331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.972348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.984359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.984375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:13.996386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:13.996397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:14.008443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:14.008454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:14.020457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:14.020470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:14.032503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:14.032517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:14.044532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:14.044544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:14.056560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:14.056571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:14.068593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:14.068603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:14.080635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:14.080657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:14.092664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:14.092674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.517 [2024-07-15 11:27:14.104696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.517 [2024-07-15 11:27:14.104707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.776 [2024-07-15 11:27:14.116729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.776 [2024-07-15 11:27:14.116740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.776 [2024-07-15 11:27:14.128770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.128785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.140797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.140808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.152830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:16:30.777 [2024-07-15 11:27:14.152841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.160847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.160857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.172884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.172897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.218600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.218618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.229036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.229049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 Running I/O for 5 seconds... 00:16:30.777 [2024-07-15 11:27:14.237056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.237066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.248165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.248185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.262480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.262500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.270052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.270071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.279664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.279683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.288749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.288768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.298081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.298100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.307343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.307362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.316626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.316645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.325217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.325240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:16:30.777 [2024-07-15 11:27:14.334566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.334584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.343802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.343821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.777 [2024-07-15 11:27:14.358558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.777 [2024-07-15 11:27:14.358577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.372493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.372512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.381200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.381218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.390008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.390026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.399414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.399433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.408208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.408233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.417580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.417598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.426024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.426044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.435301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.435319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.443803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.443821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.458742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.458761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.470115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.470134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.478882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:31.037 [2024-07-15 11:27:14.478901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.488053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.488072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.496750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.496768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.511206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.511230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.519945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.519963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.528746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.528765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.537902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.537920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.547054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.547072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.561668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.561687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.570603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.570622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.579388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.579407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.588630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.588648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.598046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.598064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.612661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.612679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.037 [2024-07-15 11:27:14.620157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.037 [2024-07-15 11:27:14.620175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.296 [2024-07-15 11:27:14.629252] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.296 [2024-07-15 11:27:14.629271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.296 [2024-07-15 11:27:14.638436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.296 [2024-07-15 11:27:14.638456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.296 [2024-07-15 11:27:14.647758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.296 [2024-07-15 11:27:14.647779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.296 [2024-07-15 11:27:14.662336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.296 [2024-07-15 11:27:14.662356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.296 [2024-07-15 11:27:14.671352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.296 [2024-07-15 11:27:14.671372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.296 [2024-07-15 11:27:14.680595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.296 [2024-07-15 11:27:14.680613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.689029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.689047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.697655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.697677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.711846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.711864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.720757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.720776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.730274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.730293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.738902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.738920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.748533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.748555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.762684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.762704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.776350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.776371] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.785253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.785272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.794082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.794101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.803400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.803418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.812644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.812663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.821744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.821762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.830767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.830785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.839995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.840014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.849220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.849252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.863547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.863567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.872464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.872483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.297 [2024-07-15 11:27:14.881147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.297 [2024-07-15 11:27:14.881166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:14.890255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:14.890273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:14.899213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:14.899239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:14.908412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:14.908429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:14.916924] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:14.916943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:14.925553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:14.925571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:14.934578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:14.934597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:14.943143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:14.943162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:14.957763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:14.957782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:14.965164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:14.965183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:14.974012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:14.974030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:14.982849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:14.982867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:14.992485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:14.992504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.006728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:15.006746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.015597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:15.015616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.024848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:15.024866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.033952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:15.033971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.043764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:15.043782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.052974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:15.052992] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.061937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:15.061959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.071701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:15.071719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.080369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:15.080387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.089862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:15.089881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.099023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:15.099042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.108112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.556 [2024-07-15 11:27:15.108130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.556 [2024-07-15 11:27:15.117396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.557 [2024-07-15 11:27:15.117414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.557 [2024-07-15 11:27:15.126297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.557 [2024-07-15 11:27:15.126316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.557 [2024-07-15 11:27:15.135390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.557 [2024-07-15 11:27:15.135408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.149775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.149794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.158975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.158993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.167603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.167622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.177418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.177437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.186105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.186123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.200856] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.200875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.209726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.209746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.218428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.218448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.227552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.227571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.236662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.236682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.251110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.251135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.260197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.260216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.268891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.268910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.278018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.278037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.287319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.287337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.301938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.816 [2024-07-15 11:27:15.301957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.816 [2024-07-15 11:27:15.310810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.817 [2024-07-15 11:27:15.310828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.817 [2024-07-15 11:27:15.319892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.817 [2024-07-15 11:27:15.319910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.817 [2024-07-15 11:27:15.329055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.817 [2024-07-15 11:27:15.329075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.817 [2024-07-15 11:27:15.338329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.817 [2024-07-15 11:27:15.338348] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.817 [2024-07-15 11:27:15.347842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.817 [2024-07-15 11:27:15.347860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.817 [2024-07-15 11:27:15.356540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.817 [2024-07-15 11:27:15.356558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.817 [2024-07-15 11:27:15.365530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.817 [2024-07-15 11:27:15.365549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.817 [2024-07-15 11:27:15.374575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.817 [2024-07-15 11:27:15.374594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.817 [2024-07-15 11:27:15.383583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.817 [2024-07-15 11:27:15.383602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.817 [2024-07-15 11:27:15.392726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.817 [2024-07-15 11:27:15.392744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.817 [2024-07-15 11:27:15.402053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.817 [2024-07-15 11:27:15.402071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.411861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.411890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.421091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.421111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.430250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.430272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.444625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.444644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.453546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.453565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.462360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.462379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.471723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.471742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.480906] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.480925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.495566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.495585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.503323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.503341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.512791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.512810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.521775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.521794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.531102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.531121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.545812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.545832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.553551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.553570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.561389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.561407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.574925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.574944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.583694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.583713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.593015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.593033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.602553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.602571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.611836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.611854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.620522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.620544] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.629810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.629828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.644381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.644399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.653557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.653576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.076 [2024-07-15 11:27:15.663022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.076 [2024-07-15 11:27:15.663043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.671705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.671724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.680221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.680245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.694555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.694573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.703433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.703451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.712233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.712267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.721400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.721418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.730778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.730796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.739966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.739984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.748526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.748544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.757635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.757654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.766742] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.766760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.776344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.776372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.790842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.790863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.798389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.798408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.807443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.807462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.816061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.816079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.824611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.824629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.838816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.838834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.846278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.846297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.856575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.856593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.865294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.865312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.875178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.875196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.889459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.889477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.898317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.898335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.907046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.907070] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.916382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.916401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.336 [2024-07-15 11:27:15.923386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.336 [2024-07-15 11:27:15.923404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.595 [2024-07-15 11:27:15.939001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.595 [2024-07-15 11:27:15.939021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.595 [2024-07-15 11:27:15.947760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.595 [2024-07-15 11:27:15.947780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.595 [2024-07-15 11:27:15.956337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.595 [2024-07-15 11:27:15.956356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.595 [2024-07-15 11:27:15.965252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.595 [2024-07-15 11:27:15.965271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.595 [2024-07-15 11:27:15.974919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.595 [2024-07-15 11:27:15.974938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.595 [2024-07-15 11:27:15.989192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.595 [2024-07-15 11:27:15.989210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.595 [2024-07-15 11:27:15.998430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.595 [2024-07-15 11:27:15.998448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.595 [2024-07-15 11:27:16.007316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.595 [2024-07-15 11:27:16.007335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.595 [2024-07-15 11:27:16.014188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.595 [2024-07-15 11:27:16.014205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.595 [2024-07-15 11:27:16.024528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.595 [2024-07-15 11:27:16.024546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.595 [2024-07-15 11:27:16.033900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.595 [2024-07-15 11:27:16.033918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.595 [2024-07-15 11:27:16.043341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.595 [2024-07-15 11:27:16.043360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.052501] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.052520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.061679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.061697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.071045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.071063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.085595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.085614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.094789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.094808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.103185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.103203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.112472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.112490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.119400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.119418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.134886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.134904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.143887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.143905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.152764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.152783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.161322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.161340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.169963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.169982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.596 [2024-07-15 11:27:16.179318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.596 [2024-07-15 11:27:16.179336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.189053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.189071] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.203411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.203430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.212211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.212234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.220858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.220877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.230301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.230320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.239190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.239209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.248345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.248364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.257590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.257609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.267287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.267306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.275915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.275934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.284489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.284507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.293585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.293603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.302931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.302950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.312179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.312198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.321411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.321429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.330671] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.330690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.855 [2024-07-15 11:27:16.339056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.855 [2024-07-15 11:27:16.339074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.856 [2024-07-15 11:27:16.347927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.856 [2024-07-15 11:27:16.347946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.856 [2024-07-15 11:27:16.357199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.856 [2024-07-15 11:27:16.357217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.856 [2024-07-15 11:27:16.365909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.856 [2024-07-15 11:27:16.365927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.856 [2024-07-15 11:27:16.374550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.856 [2024-07-15 11:27:16.374569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.856 [2024-07-15 11:27:16.383748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.856 [2024-07-15 11:27:16.383766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.856 [2024-07-15 11:27:16.393448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.856 [2024-07-15 11:27:16.393466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.856 [2024-07-15 11:27:16.402176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.856 [2024-07-15 11:27:16.402194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.856 [2024-07-15 11:27:16.416521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.856 [2024-07-15 11:27:16.416539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.856 [2024-07-15 11:27:16.424048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.856 [2024-07-15 11:27:16.424067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.856 [2024-07-15 11:27:16.433372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.856 [2024-07-15 11:27:16.433390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.856 [2024-07-15 11:27:16.442289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.856 [2024-07-15 11:27:16.442308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.451828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.451848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.466647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.466667] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.474322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.474341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.483601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.483620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.492441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.492459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.501741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.501759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.510952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.510970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.520047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.520065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.529186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.529209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.538488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.538507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.547615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.547633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.561883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.561901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.570661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.570679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.579322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.579341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.588392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.588411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.597691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.597710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.612328] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.612348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.621168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.115 [2024-07-15 11:27:16.621187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.115 [2024-07-15 11:27:16.630643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.116 [2024-07-15 11:27:16.630662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.116 [2024-07-15 11:27:16.639747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.116 [2024-07-15 11:27:16.639766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.116 [2024-07-15 11:27:16.648179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.116 [2024-07-15 11:27:16.648198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.116 [2024-07-15 11:27:16.657805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.116 [2024-07-15 11:27:16.657825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.116 [2024-07-15 11:27:16.666607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.116 [2024-07-15 11:27:16.666626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.116 [2024-07-15 11:27:16.675806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.116 [2024-07-15 11:27:16.675825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.116 [2024-07-15 11:27:16.684799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.116 [2024-07-15 11:27:16.684819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.116 [2024-07-15 11:27:16.694501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.116 [2024-07-15 11:27:16.694519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.709354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.709374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.720010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.720032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.729165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.729185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.737811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.737829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.746567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.746586] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.755275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.755293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.764998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.765016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.773574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.773593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.783185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.783204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.791780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.791798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.806249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.806268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.815156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.815174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.823985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.824004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.833123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.833143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.842411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.842431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.851852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.851871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.861086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.861104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.869865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.869884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.879779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.879797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.889012] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.889030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.903252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.903291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.912409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.912428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.921251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.921270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.930615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.930634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.939692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.939711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.953920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.953939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.375 [2024-07-15 11:27:16.962782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.375 [2024-07-15 11:27:16.962800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.634 [2024-07-15 11:27:16.971637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.634 [2024-07-15 11:27:16.971657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.634 [2024-07-15 11:27:16.980868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.634 [2024-07-15 11:27:16.980889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.634 [2024-07-15 11:27:16.990503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.634 [2024-07-15 11:27:16.990521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.634 [2024-07-15 11:27:17.004758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.634 [2024-07-15 11:27:17.004778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.634 [2024-07-15 11:27:17.013664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.634 [2024-07-15 11:27:17.013682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.022329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.022348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.031565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.031583] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.040244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.040262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.054477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.054497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.063092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.063110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.072306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.072324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.081897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.081916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.090380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.090401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.104860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.104879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.113668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.113686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.122957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.122975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.132122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.132141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.140681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.140699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.155170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.155188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.164949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.164967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.173808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.173826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.182993] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.183012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.192071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.192089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.206434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.206452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.215292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.215311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.635 [2024-07-15 11:27:17.224197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.635 [2024-07-15 11:27:17.224215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.894 [2024-07-15 11:27:17.233413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.894 [2024-07-15 11:27:17.233431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.894 [2024-07-15 11:27:17.242570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.894 [2024-07-15 11:27:17.242588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.894 [2024-07-15 11:27:17.252413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.894 [2024-07-15 11:27:17.252431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.894 [2024-07-15 11:27:17.261125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.894 [2024-07-15 11:27:17.261143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.894 [2024-07-15 11:27:17.270366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.894 [2024-07-15 11:27:17.270384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.894 [2024-07-15 11:27:17.279601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.894 [2024-07-15 11:27:17.279619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.894 [2024-07-15 11:27:17.289079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.894 [2024-07-15 11:27:17.289097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.894 [2024-07-15 11:27:17.303649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.894 [2024-07-15 11:27:17.303668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.894 [2024-07-15 11:27:17.311009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.894 [2024-07-15 11:27:17.311028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.320381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.320399] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.329537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.329556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.338113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.338131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.352766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.352785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.360320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.360339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.369074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.369092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.378445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.378464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.387328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.387347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.397130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.397148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.405794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.405813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.415179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.415198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.423936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.423954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.433161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.433181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.442523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.442541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.452270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.452288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.460860] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.460879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.469523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.469542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.895 [2024-07-15 11:27:17.478517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.895 [2024-07-15 11:27:17.478536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.493521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.493540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.509060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.509078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.518016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.518035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.526737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.526756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.536034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.536053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.550450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.550469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.559429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.559447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.569484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.569502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.578064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.578083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.587355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.587375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.601897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.601920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.610729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.610747] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.619483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.619501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.628927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.628946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.638277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.638296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.652920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.652939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.661682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.661700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.670438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.670457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.678938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.678957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.687593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.687612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.701834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.701856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.709454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.709473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.718644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.718662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.727204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.727223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.154 [2024-07-15 11:27:17.736459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.154 [2024-07-15 11:27:17.736478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.745810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.745828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.755064] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.755083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.763665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.763683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.772714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.772732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.781977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.781996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.796429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.796449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.804000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.804017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.817787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.817805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.826652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.826671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.835999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.836018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.845368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.845387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.854082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.854100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.863285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.863303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.872524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.872543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.881112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.881130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.890561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.890580] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.899707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.899725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.909014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.909033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.413 [2024-07-15 11:27:17.918494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.413 [2024-07-15 11:27:17.918512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.414 [2024-07-15 11:27:17.927730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.414 [2024-07-15 11:27:17.927748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.414 [2024-07-15 11:27:17.941939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.414 [2024-07-15 11:27:17.941957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.414 [2024-07-15 11:27:17.950879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.414 [2024-07-15 11:27:17.950897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.414 [2024-07-15 11:27:17.960098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.414 [2024-07-15 11:27:17.960116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.414 [2024-07-15 11:27:17.968731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.414 [2024-07-15 11:27:17.968749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.414 [2024-07-15 11:27:17.978024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.414 [2024-07-15 11:27:17.978043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.414 [2024-07-15 11:27:17.992621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.414 [2024-07-15 11:27:17.992640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.414 [2024-07-15 11:27:18.001801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.414 [2024-07-15 11:27:18.001820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.010670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.010689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.019310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.019332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.028421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.028440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.043149] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.043171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.050882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.050902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.058243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.058262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.067645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.067664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.076237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.076256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.090745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.090764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.104859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.104878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.113856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.113875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.122636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.122656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.132305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.132324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.147381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.147400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.154891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.154910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.163514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.163532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.173345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.173363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.181931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.181949] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.196303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.196321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.204005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.204024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.213070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.213093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.222449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.222467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.231071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.231090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.253512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.253533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.673 [2024-07-15 11:27:18.262275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.673 [2024-07-15 11:27:18.262295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.270865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.270885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.279365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.279384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.293647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.293666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.302545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.302565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.311435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.311454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.320596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.320614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.329169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.329188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.343687] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.343706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.351124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.351144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.361341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.361360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.370103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.370121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.379303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.379322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.388490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.388508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.397427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.397445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.407221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.407251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.416016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.416035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.425082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.425100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.439681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.439700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.450437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.450457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.459198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.459216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.468165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.468184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.477443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.477462] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.491764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.491783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.500558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.500576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.509720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.509739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.932 [2024-07-15 11:27:18.518846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.932 [2024-07-15 11:27:18.518864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.527977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.527996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.542426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.542445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.551264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.551282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.559954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.559971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.568850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.568867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.578025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.578043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.592587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.592605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.601611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.601633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.610076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.610094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.619360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.619379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.628376] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.628395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.642674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.642693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.656289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.656308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.664973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.664992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.674004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.674023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.683178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.683196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.697317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.697335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.706034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.706052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.715164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.715184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.724288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.724309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.733241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.733260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.747650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.747669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.756517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.756536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.765131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.765149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.191 [2024-07-15 11:27:18.774825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.191 [2024-07-15 11:27:18.774842] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.783990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.784008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.798560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.798580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.807417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.807436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.816707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.816725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.825848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.825866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.834448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.834467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.848572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.848591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.857744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.857762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.867138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.867156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.876401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.876420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.885602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.885620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.899610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.899629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.907476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.907495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.916252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.916271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.924882] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.924900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.934241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.934276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.943504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.943522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.952796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.952815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.961545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.961563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.970766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.970784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.980070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.980089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:18.994538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:18.994557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:19.003516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:19.003535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:19.012427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:19.012445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:19.021675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:19.021693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.450 [2024-07-15 11:27:19.030202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.450 [2024-07-15 11:27:19.030220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.044937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.044956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.052625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.052644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.060122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.060139] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.069898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.069916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.078786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.078805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.088103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.088121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.097302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.097321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.106566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.106585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.115787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.115805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.122907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.122925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.138579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.138598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.147408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.147427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.156756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.156775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.166073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.166091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.175442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.175459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.185283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.185303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.194053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.194072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.203249] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.203269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.212471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.212489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.221221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.221245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.235840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.235860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.243500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.243519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.250454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.250474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 00:16:35.709 Latency(us) 00:16:35.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.709 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:35.709 Nvme1n1 : 5.01 16821.72 131.42 0.00 0.00 7601.79 3362.28 15386.71 00:16:35.709 =================================================================================================================== 00:16:35.709 Total : 16821.72 131.42 0.00 0.00 7601.79 3362.28 15386.71 00:16:35.709 [2024-07-15 11:27:19.257303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.257319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.265322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.265336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.277367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.277386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.709 [2024-07-15 11:27:19.289400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.709 [2024-07-15 11:27:19.289418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.968 [2024-07-15 11:27:19.301428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.968 [2024-07-15 11:27:19.301443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.968 [2024-07-15 11:27:19.313457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.968 [2024-07-15 11:27:19.313479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.968 [2024-07-15 11:27:19.325499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.968 [2024-07-15 11:27:19.325515] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.968 [2024-07-15 11:27:19.337528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.968 [2024-07-15 11:27:19.337545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.968 [2024-07-15 11:27:19.349571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.968 [2024-07-15 11:27:19.349587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.968 [2024-07-15 11:27:19.361601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.968 [2024-07-15 11:27:19.361612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.968 [2024-07-15 11:27:19.373629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.968 [2024-07-15 11:27:19.373639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.968 [2024-07-15 11:27:19.381654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.968 [2024-07-15 11:27:19.381669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.968 [2024-07-15 11:27:19.393688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.968 [2024-07-15 11:27:19.393699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.968 [2024-07-15 11:27:19.405717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.968 [2024-07-15 11:27:19.405727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.968 [2024-07-15 11:27:19.417752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.968 [2024-07-15 11:27:19.417765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.969 [2024-07-15 11:27:19.425771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.969 [2024-07-15 11:27:19.425781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.969 [2024-07-15 11:27:19.433792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.969 [2024-07-15 11:27:19.433804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (579625) - No such process 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 579625 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:35.969 delay0 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.969 11:27:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:35.969 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.296 [2024-07-15 11:27:19.610655] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:42.909 Initializing NVMe Controllers 00:16:42.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:42.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:42.909 Initialization complete. Launching workers. 00:16:42.909 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 104 00:16:42.909 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 376, failed to submit 48 00:16:42.909 success 198, unsuccess 178, failed 0 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.909 rmmod nvme_tcp 00:16:42.909 rmmod nvme_fabrics 00:16:42.909 rmmod nvme_keyring 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 577367 ']' 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 577367 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 577367 ']' 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 577367 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 577367 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 577367' 00:16:42.909 killing process with pid 577367 
00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 577367 00:16:42.909 11:27:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 577367 00:16:42.909 11:27:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:42.909 11:27:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:42.909 11:27:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:42.909 11:27:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.909 11:27:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:42.909 11:27:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.909 11:27:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.909 11:27:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.815 11:27:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:44.815 00:16:44.815 real 0m31.986s 00:16:44.815 user 0m43.522s 00:16:44.815 sys 0m10.708s 00:16:44.815 11:27:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.815 11:27:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:44.815 ************************************ 00:16:44.815 END TEST nvmf_zcopy 00:16:44.815 ************************************ 00:16:44.815 11:27:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:44.815 11:27:28 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:44.815 11:27:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:44.815 11:27:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.815 11:27:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:44.815 ************************************ 00:16:44.815 START TEST nvmf_nmic 00:16:44.815 ************************************ 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:44.815 * Looking for test storage... 
00:16:44.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.815 11:27:28 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:44.815 11:27:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.381 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.382 
11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:51.382 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.382 11:27:33 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:51.382 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:51.382 Found net devices under 0000:86:00.0: cvl_0_0 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:51.382 Found net devices under 0000:86:00.1: cvl_0_1 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.382 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:51.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:16:51.383 00:16:51.383 --- 10.0.0.2 ping statistics --- 00:16:51.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.383 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:51.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:16:51.383 00:16:51.383 --- 10.0.0.1 ping statistics --- 00:16:51.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.383 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:51.383 11:27:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=585195 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 585195 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 585195 ']' 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.383 [2024-07-15 11:27:34.073374] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:51.383 [2024-07-15 11:27:34.073416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.383 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.383 [2024-07-15 11:27:34.145654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.383 [2024-07-15 11:27:34.230546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.383 [2024-07-15 11:27:34.230584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:51.383 [2024-07-15 11:27:34.230591] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.383 [2024-07-15 11:27:34.230597] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.383 [2024-07-15 11:27:34.230602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.383 [2024-07-15 11:27:34.230656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.383 [2024-07-15 11:27:34.230766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.383 [2024-07-15 11:27:34.230781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.383 [2024-07-15 11:27:34.230787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.383 [2024-07-15 11:27:34.925043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.383 Malloc0 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.383 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.641 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.641 11:27:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.641 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.642 [2024-07-15 11:27:34.977175] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:51.642 test case1: single bdev can't be used in multiple subsystems 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.642 11:27:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.642 [2024-07-15 11:27:35.001086] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:51.642 [2024-07-15 11:27:35.001107] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:51.642 [2024-07-15 11:27:35.001115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:51.642 request: 00:16:51.642 { 00:16:51.642 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:51.642 "namespace": { 00:16:51.642 "bdev_name": "Malloc0", 00:16:51.642 "no_auto_visible": false 00:16:51.642 }, 00:16:51.642 "method": "nvmf_subsystem_add_ns", 00:16:51.642 "req_id": 1 00:16:51.642 } 00:16:51.642 Got JSON-RPC error response 00:16:51.642 response: 00:16:51.642 { 00:16:51.642 "code": -32602, 00:16:51.642 "message": "Invalid parameters" 00:16:51.642 } 00:16:51.642 11:27:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:51.642 11:27:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:51.642 11:27:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:51.642 11:27:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:51.642 Adding namespace failed - expected result. 
00:16:51.642 11:27:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:51.642 test case2: host connect to nvmf target in multiple paths 00:16:51.642 11:27:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:51.642 11:27:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.642 11:27:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.642 [2024-07-15 11:27:35.013214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:51.642 11:27:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.642 11:27:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.577 11:27:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:53.949 11:27:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:53.949 11:27:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:53.949 11:27:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:53.949 11:27:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:53.949 11:27:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:55.845 11:27:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:55.845 11:27:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:55.845 11:27:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.845 11:27:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:55.845 11:27:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.845 11:27:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:55.845 11:27:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:55.845 [global] 00:16:55.845 thread=1 00:16:55.845 invalidate=1 00:16:55.845 rw=write 00:16:55.845 time_based=1 00:16:55.845 runtime=1 00:16:55.845 ioengine=libaio 00:16:55.845 direct=1 00:16:55.845 bs=4096 00:16:55.845 iodepth=1 00:16:55.845 norandommap=0 00:16:55.845 numjobs=1 00:16:55.845 00:16:55.845 verify_dump=1 00:16:55.845 verify_backlog=512 00:16:55.845 verify_state_save=0 00:16:55.845 do_verify=1 00:16:55.845 verify=crc32c-intel 00:16:55.845 [job0] 00:16:55.845 filename=/dev/nvme0n1 00:16:55.845 Could not set queue depth (nvme0n1) 00:16:56.103 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.103 fio-3.35 00:16:56.103 Starting 1 thread 00:16:57.475 00:16:57.475 job0: (groupid=0, jobs=1): err= 0: pid=586205: Mon Jul 15 11:27:40 2024 00:16:57.475 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:16:57.475 slat (nsec): min=9902, max=27555, avg=21013.17, stdev=2860.36 00:16:57.475 
clat (usec): min=40679, max=43971, avg=41081.38, stdev=635.81 00:16:57.475 lat (usec): min=40689, max=43999, avg=41102.39, stdev=637.41 00:16:57.475 clat percentiles (usec): 00:16:57.475 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:16:57.475 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:57.475 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:57.475 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:16:57.475 | 99.99th=[43779] 00:16:57.475 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:16:57.475 slat (nsec): min=10050, max=39247, avg=11238.31, stdev=1802.15 00:16:57.475 clat (usec): min=143, max=327, avg=165.26, stdev=22.62 00:16:57.475 lat (usec): min=154, max=366, avg=176.50, stdev=23.04 00:16:57.475 clat percentiles (usec): 00:16:57.475 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155], 00:16:57.475 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 159], 60.00th=[ 161], 00:16:57.475 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 241], 00:16:57.475 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 326], 99.95th=[ 326], 00:16:57.475 | 99.99th=[ 326] 00:16:57.475 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.475 lat (usec) : 250=95.33%, 500=0.37% 00:16:57.475 lat (msec) : 50=4.30% 00:16:57.475 cpu : usr=0.39%, sys=0.87%, ctx=535, majf=0, minf=2 00:16:57.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.475 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.475 00:16:57.475 Run status group 0 (all jobs): 00:16:57.475 READ: bw=88.6KiB/s (90.8kB/s), 88.6KiB/s-88.6KiB/s (90.8kB/s-90.8kB/s), io=92.0KiB (94.2kB), run=1038-1038msec 00:16:57.475 WRITE: bw=1973KiB/s (2020kB/s), 1973KiB/s-1973KiB/s (2020kB/s-2020kB/s), io=2048KiB (2097kB), run=1038-1038msec 00:16:57.475 00:16:57.475 Disk stats (read/write): 00:16:57.475 nvme0n1: ios=69/512, merge=0/0, ticks=815/78, in_queue=893, util=91.48% 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
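Test case 2 above adds a second listener (port 4421) to the same subsystem and connects the host over both ports, so a single namespace with serial SPDKISFASTANDAWESOME appears once both paths are up; the fio wrapper then runs a 4 KiB, queue-depth-1, 1-second write job with CRC32C verification against /dev/nvme0n1, and the final disconnect drops both controllers. A condensed sketch of the host-side commands (arguments copied from this run; the hostnqn/hostid values are specific to this machine):

  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421   # second path, same subsystem
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME                             # wait for the namespace to show up
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v   # 4 KiB, iodepth 1, write + verify
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                                      # tears down both controllers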
00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:57.475 11:27:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:57.475 rmmod nvme_tcp 00:16:57.475 rmmod nvme_fabrics 00:16:57.475 rmmod nvme_keyring 00:16:57.475 11:27:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:57.475 11:27:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:57.475 11:27:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:57.475 11:27:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 585195 ']' 00:16:57.475 11:27:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 585195 00:16:57.475 11:27:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 585195 ']' 00:16:57.475 11:27:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 585195 00:16:57.475 11:27:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:57.475 11:27:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.475 11:27:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 585195 00:16:57.734 11:27:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:57.734 11:27:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:57.734 11:27:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 585195' 00:16:57.735 killing process with pid 585195 00:16:57.735 11:27:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 585195 00:16:57.735 11:27:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 585195 00:16:57.735 11:27:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:57.735 11:27:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:57.735 11:27:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:57.735 11:27:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:57.735 11:27:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:57.735 11:27:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.735 11:27:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.735 11:27:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.273 11:27:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.273 00:17:00.273 real 0m15.188s 00:17:00.273 user 0m35.188s 00:17:00.273 sys 0m5.070s 00:17:00.273 11:27:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.273 11:27:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:00.273 ************************************ 00:17:00.273 END TEST nvmf_nmic 00:17:00.273 ************************************ 00:17:00.273 11:27:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:00.273 11:27:43 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:00.273 11:27:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # 
'[' 3 -le 1 ']' 00:17:00.273 11:27:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.273 11:27:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.273 ************************************ 00:17:00.273 START TEST nvmf_fio_target 00:17:00.273 ************************************ 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:00.273 * Looking for test storage... 00:17:00.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.273 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:00.274 11:27:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.593 11:27:49 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:05.593 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:05.593 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.593 11:27:49 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:05.593 Found net devices under 0000:86:00.0: cvl_0_0 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:05.593 Found net devices under 0000:86:00.1: cvl_0_1 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.593 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:05.594 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:05.853 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.853 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.853 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:05.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:17:05.853 00:17:05.853 --- 10.0.0.2 ping statistics --- 00:17:05.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.853 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:17:05.853 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:17:05.854 00:17:05.854 --- 10.0.0.1 ping statistics --- 00:17:05.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.854 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=589817 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 589817 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 589817 ']' 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
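Before nvmf_tgt is launched, nvmftestinit moves one port of the E810 NIC (cvl_0_0) into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24 and keeps cvl_0_1 on the host side at 10.0.0.1/24, opens TCP port 4420 in iptables, and pings in both directions before starting the target inside the namespace. A compressed sketch of that setup (interface names, addresses and commands as they appear in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420, as done in this run
  ping -c 1 10.0.0.2                                             # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host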
00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.854 11:27:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.854 [2024-07-15 11:27:49.369559] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:17:05.854 [2024-07-15 11:27:49.369610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.854 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.854 [2024-07-15 11:27:49.423002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.113 [2024-07-15 11:27:49.500541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.113 [2024-07-15 11:27:49.500583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.113 [2024-07-15 11:27:49.500591] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.113 [2024-07-15 11:27:49.500597] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.113 [2024-07-15 11:27:49.500602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.113 [2024-07-15 11:27:49.500662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.113 [2024-07-15 11:27:49.500769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.113 [2024-07-15 11:27:49.500878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.113 [2024-07-15 11:27:49.500880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.680 11:27:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.680 11:27:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:06.680 11:27:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.680 11:27:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:06.680 11:27:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.680 11:27:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.680 11:27:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:06.939 [2024-07-15 11:27:50.376864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:06.939 11:27:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.199 11:27:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:07.199 11:27:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.458 11:27:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:07.458 11:27:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.458 11:27:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:17:07.458 11:27:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.716 11:27:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:07.716 11:27:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:07.976 11:27:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.236 11:27:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:08.236 11:27:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.236 11:27:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:08.236 11:27:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.495 11:27:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:08.495 11:27:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:08.754 11:27:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:08.754 11:27:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:08.754 11:27:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:09.012 11:27:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:09.012 11:27:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.271 11:27:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.271 [2024-07-15 11:27:52.850599] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.530 11:27:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:09.530 11:27:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:09.790 11:27:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:11.168 11:27:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:11.168 11:27:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:11.168 11:27:54 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:11.168 11:27:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:11.168 11:27:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:11.169 11:27:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:13.080 11:27:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:13.080 11:27:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:13.080 11:27:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:13.080 11:27:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:13.080 11:27:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:13.080 11:27:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:13.080 11:27:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:13.080 [global] 00:17:13.080 thread=1 00:17:13.080 invalidate=1 00:17:13.080 rw=write 00:17:13.080 time_based=1 00:17:13.080 runtime=1 00:17:13.080 ioengine=libaio 00:17:13.080 direct=1 00:17:13.080 bs=4096 00:17:13.080 iodepth=1 00:17:13.080 norandommap=0 00:17:13.080 numjobs=1 00:17:13.080 00:17:13.080 verify_dump=1 00:17:13.080 verify_backlog=512 00:17:13.080 verify_state_save=0 00:17:13.080 do_verify=1 00:17:13.080 verify=crc32c-intel 00:17:13.080 [job0] 00:17:13.080 filename=/dev/nvme0n1 00:17:13.080 [job1] 00:17:13.080 filename=/dev/nvme0n2 00:17:13.080 [job2] 00:17:13.080 filename=/dev/nvme0n3 00:17:13.080 [job3] 00:17:13.080 filename=/dev/nvme0n4 00:17:13.080 Could not set queue depth (nvme0n1) 00:17:13.080 Could not set queue depth (nvme0n2) 00:17:13.080 Could not set queue depth (nvme0n3) 00:17:13.080 Could not set queue depth (nvme0n4) 00:17:13.337 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.337 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.337 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.337 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.337 fio-3.35 00:17:13.337 Starting 4 threads 00:17:14.707 00:17:14.707 job0: (groupid=0, jobs=1): err= 0: pid=591345: Mon Jul 15 11:27:57 2024 00:17:14.707 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:17:14.707 slat (nsec): min=10797, max=25230, avg=23692.41, stdev=3006.52 00:17:14.707 clat (usec): min=40788, max=41998, avg=41220.82, stdev=430.85 00:17:14.707 lat (usec): min=40812, max=42022, avg=41244.51, stdev=430.61 00:17:14.707 clat percentiles (usec): 00:17:14.707 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:14.707 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:14.707 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:17:14.707 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:14.707 | 99.99th=[42206] 00:17:14.707 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:17:14.707 slat (nsec): min=11019, max=43089, avg=13448.68, stdev=2154.96 
00:17:14.707 clat (usec): min=152, max=261, avg=199.30, stdev=18.06 00:17:14.707 lat (usec): min=166, max=274, avg=212.75, stdev=18.33 00:17:14.707 clat percentiles (usec): 00:17:14.707 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 186], 00:17:14.707 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:17:14.707 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 231], 00:17:14.707 | 99.00th=[ 255], 99.50th=[ 260], 99.90th=[ 262], 99.95th=[ 262], 00:17:14.707 | 99.99th=[ 262] 00:17:14.707 bw ( KiB/s): min= 4096, max= 4096, per=25.45%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.707 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.707 lat (usec) : 250=94.57%, 500=1.31% 00:17:14.707 lat (msec) : 50=4.12% 00:17:14.707 cpu : usr=0.79%, sys=0.59%, ctx=537, majf=0, minf=1 00:17:14.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.707 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.707 job1: (groupid=0, jobs=1): err= 0: pid=591368: Mon Jul 15 11:27:57 2024 00:17:14.707 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:17:14.707 slat (nsec): min=9461, max=23390, avg=22399.73, stdev=2894.77 00:17:14.707 clat (usec): min=40500, max=42005, avg=41034.84, stdev=328.70 00:17:14.707 lat (usec): min=40509, max=42028, avg=41057.24, stdev=329.72 00:17:14.707 clat percentiles (usec): 00:17:14.707 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:14.707 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:14.707 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:17:14.707 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:14.707 | 99.99th=[42206] 00:17:14.707 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:17:14.707 slat (nsec): min=9984, max=45385, avg=11374.55, stdev=2540.10 00:17:14.707 clat (usec): min=131, max=260, avg=186.03, stdev=20.32 00:17:14.707 lat (usec): min=141, max=298, avg=197.41, stdev=20.60 00:17:14.707 clat percentiles (usec): 00:17:14.707 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 169], 00:17:14.708 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 190], 00:17:14.708 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 221], 00:17:14.708 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 262], 99.95th=[ 262], 00:17:14.708 | 99.99th=[ 262] 00:17:14.708 bw ( KiB/s): min= 4096, max= 4096, per=25.45%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.708 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.708 lat (usec) : 250=95.69%, 500=0.19% 00:17:14.708 lat (msec) : 50=4.12% 00:17:14.708 cpu : usr=0.50%, sys=0.40%, ctx=535, majf=0, minf=1 00:17:14.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.708 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.708 job2: (groupid=0, jobs=1): err= 0: pid=591376: Mon Jul 15 11:27:57 2024 00:17:14.708 read: IOPS=2293, BW=9175KiB/s 
(9395kB/s)(9184KiB/1001msec) 00:17:14.708 slat (nsec): min=6494, max=24907, avg=7437.21, stdev=818.26 00:17:14.708 clat (usec): min=215, max=400, avg=246.96, stdev=13.17 00:17:14.708 lat (usec): min=222, max=407, avg=254.40, stdev=13.16 00:17:14.708 clat percentiles (usec): 00:17:14.708 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:17:14.708 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:17:14.708 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 265], 00:17:14.708 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 392], 99.95th=[ 396], 00:17:14.708 | 99.99th=[ 400] 00:17:14.708 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:17:14.708 slat (nsec): min=9398, max=36780, avg=10741.25, stdev=1406.51 00:17:14.708 clat (usec): min=120, max=315, avg=147.26, stdev=17.02 00:17:14.708 lat (usec): min=131, max=333, avg=158.00, stdev=17.59 00:17:14.708 clat percentiles (usec): 00:17:14.708 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 135], 00:17:14.708 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:17:14.708 | 70.00th=[ 151], 80.00th=[ 161], 90.00th=[ 174], 95.00th=[ 180], 00:17:14.708 | 99.00th=[ 194], 99.50th=[ 200], 99.90th=[ 289], 99.95th=[ 302], 00:17:14.708 | 99.99th=[ 314] 00:17:14.708 bw ( KiB/s): min=11528, max=11528, per=71.63%, avg=11528.00, stdev= 0.00, samples=1 00:17:14.708 iops : min= 2882, max= 2882, avg=2882.00, stdev= 0.00, samples=1 00:17:14.708 lat (usec) : 250=82.33%, 500=17.67% 00:17:14.708 cpu : usr=2.30%, sys=4.80%, ctx=4857, majf=0, minf=1 00:17:14.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.708 issued rwts: total=2296,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.708 job3: (groupid=0, jobs=1): err= 0: pid=591377: Mon Jul 15 11:27:57 2024 00:17:14.708 read: IOPS=22, BW=91.5KiB/s (93.6kB/s)(92.0KiB/1006msec) 00:17:14.708 slat (nsec): min=9423, max=26084, avg=12137.13, stdev=4170.90 00:17:14.708 clat (usec): min=281, max=41107, avg=39197.36, stdev=8483.80 00:17:14.708 lat (usec): min=292, max=41119, avg=39209.50, stdev=8483.93 00:17:14.708 clat percentiles (usec): 00:17:14.708 | 1.00th=[ 281], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:14.708 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:14.708 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:14.708 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:14.708 | 99.99th=[41157] 00:17:14.708 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:17:14.708 slat (nsec): min=10746, max=57727, avg=12940.90, stdev=2885.32 00:17:14.708 clat (usec): min=148, max=328, avg=187.58, stdev=22.76 00:17:14.708 lat (usec): min=160, max=357, avg=200.52, stdev=23.54 00:17:14.708 clat percentiles (usec): 00:17:14.708 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:17:14.708 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:17:14.708 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 221], 00:17:14.708 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 330], 99.95th=[ 330], 00:17:14.708 | 99.99th=[ 330] 00:17:14.708 bw ( KiB/s): min= 4096, max= 4096, per=25.45%, avg=4096.00, stdev= 0.00, samples=1 
00:17:14.708 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.708 lat (usec) : 250=93.83%, 500=2.06% 00:17:14.708 lat (msec) : 50=4.11% 00:17:14.708 cpu : usr=0.70%, sys=0.70%, ctx=535, majf=0, minf=2 00:17:14.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.708 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.708 00:17:14.708 Run status group 0 (all jobs): 00:17:14.708 READ: bw=9285KiB/s (9508kB/s), 86.4KiB/s-9175KiB/s (88.5kB/s-9395kB/s), io=9452KiB (9679kB), run=1001-1018msec 00:17:14.708 WRITE: bw=15.7MiB/s (16.5MB/s), 2012KiB/s-9.99MiB/s (2060kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1018msec 00:17:14.708 00:17:14.708 Disk stats (read/write): 00:17:14.708 nvme0n1: ios=40/512, merge=0/0, ticks=1644/101, in_queue=1745, util=97.29% 00:17:14.708 nvme0n2: ios=41/512, merge=0/0, ticks=1683/92, in_queue=1775, util=97.63% 00:17:14.708 nvme0n3: ios=1870/2048, merge=0/0, ticks=1392/283, in_queue=1675, util=97.72% 00:17:14.708 nvme0n4: ios=18/512, merge=0/0, ticks=697/88, in_queue=785, util=89.16% 00:17:14.708 11:27:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:14.708 [global] 00:17:14.708 thread=1 00:17:14.708 invalidate=1 00:17:14.708 rw=randwrite 00:17:14.708 time_based=1 00:17:14.708 runtime=1 00:17:14.708 ioengine=libaio 00:17:14.708 direct=1 00:17:14.708 bs=4096 00:17:14.708 iodepth=1 00:17:14.708 norandommap=0 00:17:14.708 numjobs=1 00:17:14.708 00:17:14.708 verify_dump=1 00:17:14.708 verify_backlog=512 00:17:14.708 verify_state_save=0 00:17:14.708 do_verify=1 00:17:14.708 verify=crc32c-intel 00:17:14.708 [job0] 00:17:14.708 filename=/dev/nvme0n1 00:17:14.708 [job1] 00:17:14.708 filename=/dev/nvme0n2 00:17:14.708 [job2] 00:17:14.708 filename=/dev/nvme0n3 00:17:14.708 [job3] 00:17:14.708 filename=/dev/nvme0n4 00:17:14.708 Could not set queue depth (nvme0n1) 00:17:14.708 Could not set queue depth (nvme0n2) 00:17:14.708 Could not set queue depth (nvme0n3) 00:17:14.708 Could not set queue depth (nvme0n4) 00:17:14.708 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.708 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.708 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.708 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.708 fio-3.35 00:17:14.708 Starting 4 threads 00:17:16.078 00:17:16.078 job0: (groupid=0, jobs=1): err= 0: pid=591741: Mon Jul 15 11:27:59 2024 00:17:16.078 read: IOPS=1021, BW=4087KiB/s (4185kB/s)(4136KiB/1012msec) 00:17:16.078 slat (nsec): min=7193, max=18029, avg=8045.91, stdev=1019.95 00:17:16.078 clat (usec): min=233, max=42027, avg=671.50, stdev=4004.61 00:17:16.078 lat (usec): min=241, max=42043, avg=679.54, stdev=4005.08 00:17:16.078 clat percentiles (usec): 00:17:16.078 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:17:16.078 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 273], 60.00th=[ 277], 00:17:16.078 | 70.00th=[ 281], 80.00th=[ 
289], 90.00th=[ 293], 95.00th=[ 302], 00:17:16.078 | 99.00th=[ 465], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:17:16.078 | 99.99th=[42206] 00:17:16.078 write: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec); 0 zone resets 00:17:16.078 slat (nsec): min=8742, max=60980, avg=11741.90, stdev=4242.92 00:17:16.078 clat (usec): min=134, max=349, avg=184.61, stdev=30.39 00:17:16.078 lat (usec): min=144, max=374, avg=196.35, stdev=31.73 00:17:16.078 clat percentiles (usec): 00:17:16.078 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:17:16.078 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:17:16.078 | 70.00th=[ 188], 80.00th=[ 200], 90.00th=[ 237], 95.00th=[ 253], 00:17:16.078 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 318], 99.95th=[ 351], 00:17:16.078 | 99.99th=[ 351] 00:17:16.078 bw ( KiB/s): min= 4024, max= 8264, per=24.84%, avg=6144.00, stdev=2998.13, samples=2 00:17:16.078 iops : min= 1006, max= 2066, avg=1536.00, stdev=749.53, samples=2 00:17:16.078 lat (usec) : 250=57.28%, 500=42.33% 00:17:16.078 lat (msec) : 50=0.39% 00:17:16.078 cpu : usr=1.19%, sys=2.87%, ctx=2570, majf=0, minf=2 00:17:16.078 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.078 issued rwts: total=1034,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.078 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.078 job1: (groupid=0, jobs=1): err= 0: pid=591742: Mon Jul 15 11:27:59 2024 00:17:16.078 read: IOPS=2037, BW=8152KiB/s (8347kB/s)(8160KiB/1001msec) 00:17:16.078 slat (nsec): min=6702, max=36263, avg=7797.50, stdev=1206.67 00:17:16.078 clat (usec): min=240, max=469, avg=288.68, stdev=51.67 00:17:16.078 lat (usec): min=247, max=477, avg=296.48, stdev=51.72 00:17:16.078 clat percentiles (usec): 00:17:16.078 | 1.00th=[ 245], 5.00th=[ 253], 10.00th=[ 255], 20.00th=[ 262], 00:17:16.078 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:17:16.078 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 400], 95.00th=[ 437], 00:17:16.078 | 99.00th=[ 457], 99.50th=[ 461], 99.90th=[ 465], 99.95th=[ 469], 00:17:16.078 | 99.99th=[ 469] 00:17:16.078 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:17:16.078 slat (nsec): min=10014, max=40522, avg=11172.72, stdev=1505.82 00:17:16.078 clat (usec): min=145, max=314, avg=175.87, stdev=12.26 00:17:16.078 lat (usec): min=156, max=350, avg=187.04, stdev=12.51 00:17:16.078 clat percentiles (usec): 00:17:16.078 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:17:16.079 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:17:16.079 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 190], 95.00th=[ 196], 00:17:16.079 | 99.00th=[ 212], 99.50th=[ 221], 99.90th=[ 249], 99.95th=[ 255], 00:17:16.079 | 99.99th=[ 314] 00:17:16.079 bw ( KiB/s): min= 8192, max= 8192, per=33.12%, avg=8192.00, stdev= 0.00, samples=1 00:17:16.079 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:16.079 lat (usec) : 250=51.59%, 500=48.41% 00:17:16.079 cpu : usr=2.60%, sys=7.10%, ctx=4088, majf=0, minf=1 00:17:16.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.079 
issued rwts: total=2040,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.079 job2: (groupid=0, jobs=1): err= 0: pid=591743: Mon Jul 15 11:27:59 2024 00:17:16.079 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:17:16.079 slat (nsec): min=10535, max=24591, avg=22501.64, stdev=2779.14 00:17:16.079 clat (usec): min=40829, max=42523, avg=41369.70, stdev=540.70 00:17:16.079 lat (usec): min=40839, max=42547, avg=41392.21, stdev=541.54 00:17:16.079 clat percentiles (usec): 00:17:16.079 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:16.079 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:17:16.079 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:16.079 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:16.079 | 99.99th=[42730] 00:17:16.079 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:17:16.079 slat (nsec): min=8927, max=37756, avg=11178.85, stdev=2225.37 00:17:16.079 clat (usec): min=156, max=299, avg=207.53, stdev=30.51 00:17:16.079 lat (usec): min=166, max=312, avg=218.71, stdev=30.17 00:17:16.079 clat percentiles (usec): 00:17:16.079 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:17:16.079 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 208], 00:17:16.079 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 253], 00:17:16.079 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 302], 99.95th=[ 302], 00:17:16.079 | 99.99th=[ 302] 00:17:16.079 bw ( KiB/s): min= 4096, max= 4096, per=16.56%, avg=4096.00, stdev= 0.00, samples=1 00:17:16.079 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:16.079 lat (usec) : 250=89.33%, 500=6.55% 00:17:16.079 lat (msec) : 50=4.12% 00:17:16.079 cpu : usr=0.49%, sys=0.49%, ctx=536, majf=0, minf=1 00:17:16.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.079 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.079 job3: (groupid=0, jobs=1): err= 0: pid=591744: Mon Jul 15 11:27:59 2024 00:17:16.079 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:17:16.079 slat (nsec): min=7154, max=20568, avg=8352.60, stdev=1146.76 00:17:16.079 clat (usec): min=205, max=379, avg=261.69, stdev=23.42 00:17:16.079 lat (usec): min=213, max=387, avg=270.05, stdev=23.54 00:17:16.079 clat percentiles (usec): 00:17:16.079 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:17:16.079 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:17:16.079 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:17:16.079 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 375], 99.95th=[ 375], 00:17:16.079 | 99.99th=[ 379] 00:17:16.079 write: IOPS=2233, BW=8935KiB/s (9150kB/s)(8944KiB/1001msec); 0 zone resets 00:17:16.079 slat (nsec): min=10178, max=35171, avg=11627.08, stdev=1540.93 00:17:16.079 clat (usec): min=149, max=273, avg=182.23, stdev=13.25 00:17:16.079 lat (usec): min=160, max=308, avg=193.86, stdev=13.50 00:17:16.079 clat percentiles (usec): 00:17:16.079 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:17:16.079 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 
184], 00:17:16.079 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 206], 00:17:16.079 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 243], 99.95th=[ 249], 00:17:16.079 | 99.99th=[ 273] 00:17:16.079 bw ( KiB/s): min= 8192, max= 8192, per=33.12%, avg=8192.00, stdev= 0.00, samples=1 00:17:16.079 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:16.079 lat (usec) : 250=70.10%, 500=29.90% 00:17:16.079 cpu : usr=5.10%, sys=5.40%, ctx=4284, majf=0, minf=1 00:17:16.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.079 issued rwts: total=2048,2236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.079 00:17:16.079 Run status group 0 (all jobs): 00:17:16.079 READ: bw=19.6MiB/s (20.6MB/s), 85.9KiB/s-8184KiB/s (88.0kB/s-8380kB/s), io=20.1MiB (21.1MB), run=1001-1024msec 00:17:16.079 WRITE: bw=24.2MiB/s (25.3MB/s), 2000KiB/s-8935KiB/s (2048kB/s-9150kB/s), io=24.7MiB (25.9MB), run=1001-1024msec 00:17:16.079 00:17:16.079 Disk stats (read/write): 00:17:16.079 nvme0n1: ios=1080/1536, merge=0/0, ticks=535/277, in_queue=812, util=86.67% 00:17:16.079 nvme0n2: ios=1595/2048, merge=0/0, ticks=540/340, in_queue=880, util=90.96% 00:17:16.079 nvme0n3: ios=41/512, merge=0/0, ticks=1690/104, in_queue=1794, util=98.23% 00:17:16.079 nvme0n4: ios=1615/2048, merge=0/0, ticks=410/364, in_queue=774, util=89.62% 00:17:16.079 11:27:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:16.079 [global] 00:17:16.079 thread=1 00:17:16.079 invalidate=1 00:17:16.079 rw=write 00:17:16.079 time_based=1 00:17:16.079 runtime=1 00:17:16.079 ioengine=libaio 00:17:16.079 direct=1 00:17:16.079 bs=4096 00:17:16.079 iodepth=128 00:17:16.079 norandommap=0 00:17:16.079 numjobs=1 00:17:16.079 00:17:16.079 verify_dump=1 00:17:16.079 verify_backlog=512 00:17:16.079 verify_state_save=0 00:17:16.079 do_verify=1 00:17:16.079 verify=crc32c-intel 00:17:16.079 [job0] 00:17:16.079 filename=/dev/nvme0n1 00:17:16.079 [job1] 00:17:16.079 filename=/dev/nvme0n2 00:17:16.079 [job2] 00:17:16.079 filename=/dev/nvme0n3 00:17:16.079 [job3] 00:17:16.079 filename=/dev/nvme0n4 00:17:16.079 Could not set queue depth (nvme0n1) 00:17:16.079 Could not set queue depth (nvme0n2) 00:17:16.079 Could not set queue depth (nvme0n3) 00:17:16.079 Could not set queue depth (nvme0n4) 00:17:16.336 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.336 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.336 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.336 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.336 fio-3.35 00:17:16.336 Starting 4 threads 00:17:17.706 00:17:17.706 job0: (groupid=0, jobs=1): err= 0: pid=592119: Mon Jul 15 11:28:01 2024 00:17:17.706 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:17:17.706 slat (nsec): min=1019, max=36886k, avg=171405.62, stdev=1424601.23 00:17:17.706 clat (usec): min=4543, max=98558, avg=20970.25, stdev=17495.64 00:17:17.706 lat (msec): min=4, max=112, 
avg=21.14, stdev=17.65 00:17:17.706 clat percentiles (usec): 00:17:17.706 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11338], 00:17:17.706 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[12911], 00:17:17.706 | 70.00th=[13435], 80.00th=[33424], 90.00th=[52691], 95.00th=[56886], 00:17:17.706 | 99.00th=[69731], 99.50th=[87557], 99.90th=[94897], 99.95th=[94897], 00:17:17.706 | 99.99th=[98042] 00:17:17.706 write: IOPS=3491, BW=13.6MiB/s (14.3MB/s)(13.8MiB/1009msec); 0 zone resets 00:17:17.706 slat (nsec): min=1932, max=23549k, avg=127632.30, stdev=819876.04 00:17:17.706 clat (usec): min=466, max=91707, avg=16993.74, stdev=14220.87 00:17:17.706 lat (usec): min=3795, max=91711, avg=17121.38, stdev=14269.73 00:17:17.706 clat percentiles (usec): 00:17:17.706 | 1.00th=[ 5407], 5.00th=[ 7439], 10.00th=[ 9372], 20.00th=[10683], 00:17:17.706 | 30.00th=[11469], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:17:17.706 | 70.00th=[12518], 80.00th=[22414], 90.00th=[31327], 95.00th=[49546], 00:17:17.706 | 99.00th=[91751], 99.50th=[91751], 99.90th=[91751], 99.95th=[91751], 00:17:17.706 | 99.99th=[91751] 00:17:17.706 bw ( KiB/s): min= 6928, max=20232, per=18.26%, avg=13580.00, stdev=9407.35, samples=2 00:17:17.706 iops : min= 1732, max= 5058, avg=3395.00, stdev=2351.84, samples=2 00:17:17.706 lat (usec) : 500=0.02% 00:17:17.706 lat (msec) : 4=0.12%, 10=12.62%, 20=64.76%, 50=14.86%, 100=7.63% 00:17:17.706 cpu : usr=1.59%, sys=2.58%, ctx=345, majf=0, minf=1 00:17:17.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:17:17.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.706 issued rwts: total=3072,3523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.706 job1: (groupid=0, jobs=1): err= 0: pid=592120: Mon Jul 15 11:28:01 2024 00:17:17.706 read: IOPS=6088, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1008msec) 00:17:17.706 slat (nsec): min=1335, max=10298k, avg=90875.07, stdev=644199.56 00:17:17.706 clat (usec): min=3127, max=21646, avg=11251.27, stdev=2795.67 00:17:17.706 lat (usec): min=4029, max=21674, avg=11342.14, stdev=2838.93 00:17:17.706 clat percentiles (usec): 00:17:17.706 | 1.00th=[ 4686], 5.00th=[ 7570], 10.00th=[ 8455], 20.00th=[ 9503], 00:17:17.706 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10814], 60.00th=[11076], 00:17:17.706 | 70.00th=[11469], 80.00th=[12649], 90.00th=[15664], 95.00th=[17171], 00:17:17.706 | 99.00th=[20055], 99.50th=[20317], 99.90th=[21103], 99.95th=[21103], 00:17:17.706 | 99.99th=[21627] 00:17:17.706 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:17:17.706 slat (usec): min=2, max=8804, avg=66.43, stdev=384.80 00:17:17.706 clat (usec): min=728, max=21016, avg=9568.54, stdev=2255.98 00:17:17.706 lat (usec): min=739, max=21029, avg=9634.97, stdev=2290.71 00:17:17.706 clat percentiles (usec): 00:17:17.706 | 1.00th=[ 2933], 5.00th=[ 4817], 10.00th=[ 6783], 20.00th=[ 7963], 00:17:17.706 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:17:17.706 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11338], 95.00th=[11469], 00:17:17.706 | 99.00th=[15270], 99.50th=[17695], 99.90th=[20317], 99.95th=[20579], 00:17:17.706 | 99.99th=[21103] 00:17:17.706 bw ( KiB/s): min=24576, max=24576, per=33.04%, avg=24576.00, stdev= 0.00, samples=2 00:17:17.706 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, 
samples=2 00:17:17.706 lat (usec) : 750=0.04%, 1000=0.01% 00:17:17.706 lat (msec) : 2=0.20%, 4=1.49%, 10=34.70%, 20=62.97%, 50=0.59% 00:17:17.706 cpu : usr=4.17%, sys=6.06%, ctx=677, majf=0, minf=1 00:17:17.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:17.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.706 issued rwts: total=6137,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.706 job2: (groupid=0, jobs=1): err= 0: pid=592121: Mon Jul 15 11:28:01 2024 00:17:17.706 read: IOPS=4304, BW=16.8MiB/s (17.6MB/s)(17.5MiB/1043msec) 00:17:17.706 slat (nsec): min=1115, max=12991k, avg=116443.01, stdev=869175.84 00:17:17.706 clat (usec): min=4506, max=53376, avg=16160.28, stdev=7050.06 00:17:17.706 lat (usec): min=4512, max=56656, avg=16276.73, stdev=7083.45 00:17:17.706 clat percentiles (usec): 00:17:17.706 | 1.00th=[ 5538], 5.00th=[10028], 10.00th=[11338], 20.00th=[12649], 00:17:17.706 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14091], 60.00th=[14484], 00:17:17.706 | 70.00th=[16188], 80.00th=[19792], 90.00th=[22676], 95.00th=[25560], 00:17:17.706 | 99.00th=[49546], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:17:17.706 | 99.99th=[53216] 00:17:17.706 write: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1043msec); 0 zone resets 00:17:17.706 slat (usec): min=2, max=11522, avg=93.52, stdev=603.51 00:17:17.706 clat (usec): min=1503, max=28865, avg=12889.56, stdev=2988.68 00:17:17.706 lat (usec): min=1519, max=28873, avg=12983.08, stdev=3051.62 00:17:17.706 clat percentiles (usec): 00:17:17.706 | 1.00th=[ 4228], 5.00th=[ 7242], 10.00th=[ 9110], 20.00th=[11076], 00:17:17.706 | 30.00th=[11731], 40.00th=[12780], 50.00th=[13173], 60.00th=[13829], 00:17:17.706 | 70.00th=[14091], 80.00th=[14615], 90.00th=[16188], 95.00th=[17171], 00:17:17.706 | 99.00th=[20317], 99.50th=[22414], 99.90th=[26608], 99.95th=[26608], 00:17:17.706 | 99.99th=[28967] 00:17:17.706 bw ( KiB/s): min=16896, max=19968, per=24.78%, avg=18432.00, stdev=2172.23, samples=2 00:17:17.706 iops : min= 4224, max= 4992, avg=4608.00, stdev=543.06, samples=2 00:17:17.706 lat (msec) : 2=0.02%, 4=0.43%, 10=8.72%, 20=80.36%, 50=10.01% 00:17:17.706 lat (msec) : 100=0.46% 00:17:17.706 cpu : usr=4.61%, sys=4.13%, ctx=383, majf=0, minf=1 00:17:17.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:17.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.706 issued rwts: total=4490,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.706 job3: (groupid=0, jobs=1): err= 0: pid=592122: Mon Jul 15 11:28:01 2024 00:17:17.706 read: IOPS=5086, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1005msec) 00:17:17.706 slat (nsec): min=1317, max=12366k, avg=109679.94, stdev=841417.93 00:17:17.706 clat (usec): min=1851, max=34216, avg=13747.92, stdev=3942.77 00:17:17.706 lat (usec): min=3300, max=34244, avg=13857.60, stdev=4006.63 00:17:17.706 clat percentiles (usec): 00:17:17.706 | 1.00th=[ 4621], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[11207], 00:17:17.706 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:17:17.706 | 70.00th=[14484], 80.00th=[17433], 90.00th=[20055], 95.00th=[21365], 00:17:17.706 | 
99.00th=[24511], 99.50th=[24511], 99.90th=[28705], 99.95th=[30278], 00:17:17.706 | 99.99th=[34341] 00:17:17.706 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:17:17.706 slat (usec): min=2, max=13008, avg=78.03, stdev=425.44 00:17:17.706 clat (usec): min=1602, max=25888, avg=11175.61, stdev=2603.61 00:17:17.706 lat (usec): min=1614, max=25920, avg=11253.64, stdev=2644.44 00:17:17.706 clat percentiles (usec): 00:17:17.706 | 1.00th=[ 3097], 5.00th=[ 5473], 10.00th=[ 7635], 20.00th=[ 9765], 00:17:17.706 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11994], 00:17:17.706 | 70.00th=[12518], 80.00th=[13042], 90.00th=[13173], 95.00th=[13829], 00:17:17.706 | 99.00th=[16712], 99.50th=[19530], 99.90th=[24511], 99.95th=[24511], 00:17:17.706 | 99.99th=[25822] 00:17:17.706 bw ( KiB/s): min=20480, max=20480, per=27.53%, avg=20480.00, stdev= 0.00, samples=2 00:17:17.706 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:17:17.706 lat (msec) : 2=0.20%, 4=1.43%, 10=12.63%, 20=80.24%, 50=5.51% 00:17:17.706 cpu : usr=3.78%, sys=4.78%, ctx=636, majf=0, minf=1 00:17:17.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:17.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.706 issued rwts: total=5112,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.706 00:17:17.706 Run status group 0 (all jobs): 00:17:17.706 READ: bw=70.5MiB/s (73.9MB/s), 11.9MiB/s-23.8MiB/s (12.5MB/s-24.9MB/s), io=73.5MiB (77.0MB), run=1005-1043msec 00:17:17.706 WRITE: bw=72.6MiB/s (76.2MB/s), 13.6MiB/s-23.8MiB/s (14.3MB/s-25.0MB/s), io=75.8MiB (79.4MB), run=1005-1043msec 00:17:17.706 00:17:17.706 Disk stats (read/write): 00:17:17.706 nvme0n1: ios=2929/3072, merge=0/0, ticks=16853/10625, in_queue=27478, util=91.48% 00:17:17.706 nvme0n2: ios=5170/5206, merge=0/0, ticks=53454/48155, in_queue=101609, util=91.18% 00:17:17.706 nvme0n3: ios=3682/4096, merge=0/0, ticks=46833/42487, in_queue=89320, util=98.75% 00:17:17.706 nvme0n4: ios=4205/4608, merge=0/0, ticks=53607/50875, in_queue=104482, util=95.60% 00:17:17.706 11:28:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:17.706 [global] 00:17:17.706 thread=1 00:17:17.706 invalidate=1 00:17:17.706 rw=randwrite 00:17:17.706 time_based=1 00:17:17.706 runtime=1 00:17:17.706 ioengine=libaio 00:17:17.706 direct=1 00:17:17.706 bs=4096 00:17:17.706 iodepth=128 00:17:17.706 norandommap=0 00:17:17.706 numjobs=1 00:17:17.706 00:17:17.706 verify_dump=1 00:17:17.706 verify_backlog=512 00:17:17.706 verify_state_save=0 00:17:17.706 do_verify=1 00:17:17.706 verify=crc32c-intel 00:17:17.706 [job0] 00:17:17.706 filename=/dev/nvme0n1 00:17:17.706 [job1] 00:17:17.706 filename=/dev/nvme0n2 00:17:17.706 [job2] 00:17:17.706 filename=/dev/nvme0n3 00:17:17.706 [job3] 00:17:17.706 filename=/dev/nvme0n4 00:17:17.706 Could not set queue depth (nvme0n1) 00:17:17.706 Could not set queue depth (nvme0n2) 00:17:17.706 Could not set queue depth (nvme0n3) 00:17:17.706 Could not set queue depth (nvme0n4) 00:17:17.963 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.963 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:17:17.963 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.963 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.963 fio-3.35 00:17:17.963 Starting 4 threads 00:17:19.334 00:17:19.334 job0: (groupid=0, jobs=1): err= 0: pid=592488: Mon Jul 15 11:28:02 2024 00:17:19.334 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:17:19.334 slat (nsec): min=1098, max=18781k, avg=85155.92, stdev=654920.23 00:17:19.334 clat (usec): min=2242, max=42532, avg=11701.44, stdev=5372.39 00:17:19.334 lat (usec): min=2251, max=45117, avg=11786.60, stdev=5416.51 00:17:19.334 clat percentiles (usec): 00:17:19.334 | 1.00th=[ 3752], 5.00th=[ 6521], 10.00th=[ 8094], 20.00th=[ 8586], 00:17:19.334 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:17:19.334 | 70.00th=[11731], 80.00th=[13304], 90.00th=[19268], 95.00th=[25822], 00:17:19.334 | 99.00th=[29230], 99.50th=[33817], 99.90th=[42730], 99.95th=[42730], 00:17:19.334 | 99.99th=[42730] 00:17:19.334 write: IOPS=5885, BW=23.0MiB/s (24.1MB/s)(23.2MiB/1011msec); 0 zone resets 00:17:19.334 slat (usec): min=2, max=19785, avg=66.00, stdev=470.94 00:17:19.334 clat (usec): min=407, max=146009, avg=12513.05, stdev=14108.69 00:17:19.334 lat (usec): min=435, max=146017, avg=12579.06, stdev=14159.62 00:17:19.334 clat percentiles (usec): 00:17:19.334 | 1.00th=[ 963], 5.00th=[ 2442], 10.00th=[ 3720], 20.00th=[ 5014], 00:17:19.334 | 30.00th=[ 6194], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[ 9896], 00:17:19.334 | 70.00th=[ 10290], 80.00th=[ 15008], 90.00th=[ 31851], 95.00th=[ 36439], 00:17:19.334 | 99.00th=[ 80217], 99.50th=[108528], 99.90th=[132645], 99.95th=[139461], 00:17:19.334 | 99.99th=[145753] 00:17:19.334 bw ( KiB/s): min=17912, max=28672, per=33.62%, avg=23292.00, stdev=7608.47, samples=2 00:17:19.334 iops : min= 4478, max= 7168, avg=5823.00, stdev=1902.12, samples=2 00:17:19.334 lat (usec) : 500=0.02%, 750=0.09%, 1000=0.78% 00:17:19.334 lat (msec) : 2=1.55%, 4=4.39%, 10=52.65%, 20=28.50%, 50=11.09% 00:17:19.334 lat (msec) : 100=0.59%, 250=0.34% 00:17:19.334 cpu : usr=4.85%, sys=5.35%, ctx=525, majf=0, minf=1 00:17:19.334 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:19.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.334 issued rwts: total=4608,5950,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.334 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.334 job1: (groupid=0, jobs=1): err= 0: pid=592489: Mon Jul 15 11:28:02 2024 00:17:19.334 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:17:19.334 slat (nsec): min=1470, max=13127k, avg=94081.60, stdev=619140.81 00:17:19.334 clat (usec): min=5865, max=46638, avg=11956.66, stdev=5909.01 00:17:19.334 lat (usec): min=5869, max=46644, avg=12050.74, stdev=5962.25 00:17:19.334 clat percentiles (usec): 00:17:19.334 | 1.00th=[ 7046], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 9372], 00:17:19.334 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:17:19.334 | 70.00th=[10945], 80.00th=[12125], 90.00th=[16712], 95.00th=[25560], 00:17:19.334 | 99.00th=[38011], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:17:19.334 | 99.99th=[46400] 00:17:19.334 write: IOPS=5573, BW=21.8MiB/s (22.8MB/s)(21.9MiB/1008msec); 0 zone resets 00:17:19.334 slat (nsec): 
min=1918, max=6986.0k, avg=86758.48, stdev=466618.20 00:17:19.334 clat (usec): min=4044, max=55061, avg=11817.34, stdev=7495.13 00:17:19.334 lat (usec): min=4752, max=55065, avg=11904.10, stdev=7542.73 00:17:19.334 clat percentiles (usec): 00:17:19.334 | 1.00th=[ 5997], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 8455], 00:17:19.334 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:17:19.334 | 70.00th=[10290], 80.00th=[10814], 90.00th=[15926], 95.00th=[28443], 00:17:19.334 | 99.00th=[49021], 99.50th=[53216], 99.90th=[55313], 99.95th=[55313], 00:17:19.334 | 99.99th=[55313] 00:17:19.334 bw ( KiB/s): min=17432, max=26496, per=31.70%, avg=21964.00, stdev=6409.22, samples=2 00:17:19.334 iops : min= 4358, max= 6624, avg=5491.00, stdev=1602.30, samples=2 00:17:19.334 lat (msec) : 10=49.95%, 20=42.48%, 50=7.05%, 100=0.51% 00:17:19.334 cpu : usr=4.37%, sys=5.56%, ctx=518, majf=0, minf=1 00:17:19.334 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:19.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.334 issued rwts: total=5120,5618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.334 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.334 job2: (groupid=0, jobs=1): err= 0: pid=592495: Mon Jul 15 11:28:02 2024 00:17:19.334 read: IOPS=3546, BW=13.9MiB/s (14.5MB/s)(14.5MiB/1046msec) 00:17:19.335 slat (nsec): min=1353, max=18437k, avg=125860.56, stdev=910940.20 00:17:19.335 clat (usec): min=3548, max=68421, avg=16742.56, stdev=9231.27 00:17:19.335 lat (usec): min=3559, max=68426, avg=16868.42, stdev=9277.49 00:17:19.335 clat percentiles (usec): 00:17:19.335 | 1.00th=[ 8160], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10814], 00:17:19.335 | 30.00th=[11600], 40.00th=[13566], 50.00th=[15533], 60.00th=[16319], 00:17:19.335 | 70.00th=[17171], 80.00th=[19268], 90.00th=[23462], 95.00th=[26346], 00:17:19.335 | 99.00th=[62653], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:17:19.335 | 99.99th=[68682] 00:17:19.335 write: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1046msec); 0 zone resets 00:17:19.335 slat (usec): min=2, max=12608, avg=124.76, stdev=722.47 00:17:19.335 clat (usec): min=1526, max=68427, avg=17185.10, stdev=7266.91 00:17:19.335 lat (usec): min=1539, max=68433, avg=17309.87, stdev=7323.43 00:17:19.335 clat percentiles (usec): 00:17:19.335 | 1.00th=[ 4555], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[11207], 00:17:19.335 | 30.00th=[11863], 40.00th=[14353], 50.00th=[16450], 60.00th=[18220], 00:17:19.335 | 70.00th=[18482], 80.00th=[22938], 90.00th=[28705], 95.00th=[32637], 00:17:19.335 | 99.00th=[36439], 99.50th=[36963], 99.90th=[39060], 99.95th=[39060], 00:17:19.335 | 99.99th=[68682] 00:17:19.335 bw ( KiB/s): min=16368, max=16384, per=23.63%, avg=16376.00, stdev=11.31, samples=2 00:17:19.335 iops : min= 4092, max= 4096, avg=4094.00, stdev= 2.83, samples=2 00:17:19.335 lat (msec) : 2=0.03%, 4=0.44%, 10=13.44%, 20=64.92%, 50=19.56% 00:17:19.335 lat (msec) : 100=1.61% 00:17:19.335 cpu : usr=3.44%, sys=4.40%, ctx=379, majf=0, minf=1 00:17:19.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:19.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.335 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.335 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:17:19.335 job3: (groupid=0, jobs=1): err= 0: pid=592497: Mon Jul 15 11:28:02 2024 00:17:19.335 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:17:19.335 slat (usec): min=2, max=25361, avg=215.90, stdev=1459.11 00:17:19.335 clat (usec): min=7673, max=96211, avg=21103.64, stdev=17123.67 00:17:19.335 lat (usec): min=7679, max=96220, avg=21319.55, stdev=17315.19 00:17:19.335 clat percentiles (usec): 00:17:19.335 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[11600], 00:17:19.335 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[15139], 00:17:19.335 | 70.00th=[16057], 80.00th=[28181], 90.00th=[41681], 95.00th=[65799], 00:17:19.335 | 99.00th=[85459], 99.50th=[88605], 99.90th=[95945], 99.95th=[95945], 00:17:19.335 | 99.99th=[95945] 00:17:19.335 write: IOPS=2435, BW=9742KiB/s (9976kB/s)(9820KiB/1008msec); 0 zone resets 00:17:19.335 slat (usec): min=2, max=26769, avg=221.85, stdev=1257.74 00:17:19.335 clat (msec): min=5, max=119, avg=34.01, stdev=23.47 00:17:19.335 lat (msec): min=10, max=119, avg=34.23, stdev=23.61 00:17:19.335 clat percentiles (msec): 00:17:19.335 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 17], 20.00th=[ 19], 00:17:19.335 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 24], 60.00th=[ 32], 00:17:19.335 | 70.00th=[ 35], 80.00th=[ 45], 90.00th=[ 67], 95.00th=[ 93], 00:17:19.335 | 99.00th=[ 107], 99.50th=[ 118], 99.90th=[ 120], 99.95th=[ 120], 00:17:19.335 | 99.99th=[ 120] 00:17:19.335 bw ( KiB/s): min= 8192, max=10424, per=13.43%, avg=9308.00, stdev=1578.26, samples=2 00:17:19.335 iops : min= 2048, max= 2606, avg=2327.00, stdev=394.57, samples=2 00:17:19.335 lat (msec) : 10=2.66%, 20=52.08%, 50=32.47%, 100=11.30%, 250=1.49% 00:17:19.335 cpu : usr=2.28%, sys=2.18%, ctx=306, majf=0, minf=1 00:17:19.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:17:19.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.335 issued rwts: total=2048,2455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.335 00:17:19.335 Run status group 0 (all jobs): 00:17:19.335 READ: bw=57.8MiB/s (60.6MB/s), 8127KiB/s-19.8MiB/s (8322kB/s-20.8MB/s), io=60.5MiB (63.4MB), run=1008-1046msec 00:17:19.335 WRITE: bw=67.7MiB/s (71.0MB/s), 9742KiB/s-23.0MiB/s (9976kB/s-24.1MB/s), io=70.8MiB (74.2MB), run=1008-1046msec 00:17:19.335 00:17:19.335 Disk stats (read/write): 00:17:19.335 nvme0n1: ios=4119/5127, merge=0/0, ticks=47117/58483, in_queue=105600, util=98.60% 00:17:19.335 nvme0n2: ios=4608/4911, merge=0/0, ticks=27164/23097, in_queue=50261, util=86.41% 00:17:19.335 nvme0n3: ios=3117/3510, merge=0/0, ticks=48915/57318, in_queue=106233, util=97.09% 00:17:19.335 nvme0n4: ios=1580/1983, merge=0/0, ticks=21734/30768, in_queue=52502, util=98.53% 00:17:19.335 11:28:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:19.335 11:28:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=592730 00:17:19.335 11:28:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:19.335 11:28:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:19.335 [global] 00:17:19.335 thread=1 00:17:19.335 invalidate=1 00:17:19.335 rw=read 00:17:19.335 time_based=1 00:17:19.335 runtime=10 00:17:19.335 ioengine=libaio 00:17:19.335 direct=1 
00:17:19.335 bs=4096 00:17:19.335 iodepth=1 00:17:19.335 norandommap=1 00:17:19.335 numjobs=1 00:17:19.335 00:17:19.335 [job0] 00:17:19.335 filename=/dev/nvme0n1 00:17:19.335 [job1] 00:17:19.335 filename=/dev/nvme0n2 00:17:19.335 [job2] 00:17:19.335 filename=/dev/nvme0n3 00:17:19.335 [job3] 00:17:19.335 filename=/dev/nvme0n4 00:17:19.335 Could not set queue depth (nvme0n1) 00:17:19.335 Could not set queue depth (nvme0n2) 00:17:19.335 Could not set queue depth (nvme0n3) 00:17:19.335 Could not set queue depth (nvme0n4) 00:17:19.591 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:19.591 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:19.591 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:19.591 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:19.591 fio-3.35 00:17:19.591 Starting 4 threads 00:17:22.160 11:28:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:22.417 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=39526400, buflen=4096 00:17:22.417 fio: pid=592871, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:22.417 11:28:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:22.673 11:28:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:22.673 11:28:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:22.673 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=22396928, buflen=4096 00:17:22.673 fio: pid=592870, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:22.931 11:28:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:22.931 11:28:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:22.931 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=376832, buflen=4096 00:17:22.931 fio: pid=592868, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:22.931 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=2560000, buflen=4096 00:17:22.931 fio: pid=592869, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:22.931 11:28:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:22.931 11:28:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:22.931 00:17:22.931 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=592868: Mon Jul 15 11:28:06 2024 00:17:22.931 read: IOPS=30, BW=121KiB/s (124kB/s)(368KiB/3043msec) 00:17:22.931 slat (nsec): min=3490, max=66474, avg=14307.09, stdev=9148.74 00:17:22.931 clat (usec): min=235, max=42097, avg=32831.31, stdev=16683.23 00:17:22.931 lat (usec): min=239, max=42105, avg=32845.52, stdev=16686.67 00:17:22.931 clat 
percentiles (usec): 00:17:22.931 | 1.00th=[ 237], 5.00th=[ 273], 10.00th=[ 310], 20.00th=[ 594], 00:17:22.931 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:22.931 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:17:22.931 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:22.931 | 99.99th=[42206] 00:17:22.931 bw ( KiB/s): min= 96, max= 239, per=0.64%, avg=126.20, stdev=63.15, samples=5 00:17:22.931 iops : min= 24, max= 59, avg=31.40, stdev=15.45, samples=5 00:17:22.931 lat (usec) : 250=3.23%, 500=16.13%, 750=1.08% 00:17:22.931 lat (msec) : 50=78.49% 00:17:22.931 cpu : usr=0.00%, sys=0.07%, ctx=95, majf=0, minf=1 00:17:22.931 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:22.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.931 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.931 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.931 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:22.931 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=592869: Mon Jul 15 11:28:06 2024 00:17:22.931 read: IOPS=193, BW=775KiB/s (793kB/s)(2500KiB/3227msec) 00:17:22.931 slat (usec): min=6, max=14767, avg=40.75, stdev=618.41 00:17:22.931 clat (usec): min=204, max=41851, avg=5074.41, stdev=13174.83 00:17:22.931 lat (usec): min=212, max=45922, avg=5115.21, stdev=13205.89 00:17:22.931 clat percentiles (usec): 00:17:22.931 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:17:22.931 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:17:22.931 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[41157], 95.00th=[41157], 00:17:22.931 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:17:22.931 | 99.99th=[41681] 00:17:22.931 bw ( KiB/s): min= 96, max= 158, per=0.55%, avg=107.50, stdev=24.90, samples=6 00:17:22.931 iops : min= 24, max= 39, avg=26.67, stdev= 6.06, samples=6 00:17:22.931 lat (usec) : 250=47.28%, 500=40.58%, 750=0.16% 00:17:22.931 lat (msec) : 50=11.82% 00:17:22.931 cpu : usr=0.19%, sys=0.28%, ctx=628, majf=0, minf=1 00:17:22.931 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:22.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.931 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.931 issued rwts: total=626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.931 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:22.931 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=592870: Mon Jul 15 11:28:06 2024 00:17:22.931 read: IOPS=1914, BW=7658KiB/s (7842kB/s)(21.4MiB/2856msec) 00:17:22.931 slat (nsec): min=3919, max=74972, avg=9095.87, stdev=1933.90 00:17:22.931 clat (usec): min=203, max=42038, avg=507.71, stdev=3175.81 00:17:22.931 lat (usec): min=219, max=42051, avg=516.81, stdev=3176.41 00:17:22.931 clat percentiles (usec): 00:17:22.931 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:17:22.931 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:17:22.931 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 343], 95.00th=[ 359], 00:17:22.931 | 99.00th=[ 457], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:17:22.931 | 99.99th=[42206] 00:17:22.931 bw ( KiB/s): min= 384, max=15337, per=42.58%, avg=8357.00, stdev=7180.59, 
samples=5 00:17:22.931 iops : min= 96, max= 3834, avg=2089.20, stdev=1795.09, samples=5 00:17:22.931 lat (usec) : 250=52.13%, 500=47.12%, 750=0.11%, 1000=0.02% 00:17:22.931 lat (msec) : 50=0.60% 00:17:22.931 cpu : usr=0.63%, sys=2.14%, ctx=5471, majf=0, minf=1 00:17:22.931 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:22.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.931 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.931 issued rwts: total=5469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.931 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:22.931 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=592871: Mon Jul 15 11:28:06 2024 00:17:22.931 read: IOPS=3657, BW=14.3MiB/s (15.0MB/s)(37.7MiB/2639msec) 00:17:22.931 slat (nsec): min=5214, max=36123, avg=7263.33, stdev=924.20 00:17:22.931 clat (usec): min=215, max=633, avg=264.09, stdev=30.86 00:17:22.931 lat (usec): min=223, max=669, avg=271.36, stdev=30.91 00:17:22.931 clat percentiles (usec): 00:17:22.931 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:17:22.931 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:17:22.931 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:17:22.931 | 99.00th=[ 441], 99.50th=[ 449], 99.90th=[ 461], 99.95th=[ 469], 00:17:22.931 | 99.99th=[ 635] 00:17:22.931 bw ( KiB/s): min=13620, max=15400, per=74.80%, avg=14682.40, stdev=785.36, samples=5 00:17:22.931 iops : min= 3405, max= 3850, avg=3670.60, stdev=196.34, samples=5 00:17:22.931 lat (usec) : 250=27.58%, 500=72.40%, 750=0.01% 00:17:22.931 cpu : usr=1.18%, sys=3.07%, ctx=9652, majf=0, minf=2 00:17:22.931 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:22.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.931 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.931 issued rwts: total=9651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.931 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:22.931 00:17:22.931 Run status group 0 (all jobs): 00:17:22.931 READ: bw=19.2MiB/s (20.1MB/s), 121KiB/s-14.3MiB/s (124kB/s-15.0MB/s), io=61.9MiB (64.9MB), run=2639-3227msec 00:17:22.931 00:17:22.931 Disk stats (read/write): 00:17:22.931 nvme0n1: ios=120/0, merge=0/0, ticks=2993/0, in_queue=2993, util=98.87% 00:17:22.931 nvme0n2: ios=83/0, merge=0/0, ticks=3038/0, in_queue=3038, util=94.41% 00:17:22.931 nvme0n3: ios=5509/0, merge=0/0, ticks=3903/0, in_queue=3903, util=100.00% 00:17:22.931 nvme0n4: ios=9419/0, merge=0/0, ticks=2581/0, in_queue=2581, util=98.80% 00:17:23.188 11:28:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:23.188 11:28:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:23.445 11:28:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:23.445 11:28:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:23.702 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:23.702 11:28:07 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:23.702 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:23.702 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 592730 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:23.959 nvmf hotplug test: fio failed as expected 00:17:23.959 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.216 rmmod nvme_tcp 00:17:24.216 rmmod nvme_fabrics 00:17:24.216 rmmod nvme_keyring 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@489 -- # '[' -n 589817 ']' 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 589817 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 589817 ']' 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 589817 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.216 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 589817 00:17:24.475 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:24.475 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:24.475 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 589817' 00:17:24.475 killing process with pid 589817 00:17:24.475 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 589817 00:17:24.475 11:28:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 589817 00:17:24.475 11:28:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:24.475 11:28:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:24.475 11:28:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:24.475 11:28:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.475 11:28:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.475 11:28:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.475 11:28:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.475 11:28:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.014 11:28:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:27.014 00:17:27.014 real 0m26.685s 00:17:27.014 user 1m46.137s 00:17:27.014 sys 0m8.197s 00:17:27.014 11:28:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:27.014 11:28:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.014 ************************************ 00:17:27.014 END TEST nvmf_fio_target 00:17:27.014 ************************************ 00:17:27.014 11:28:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:27.014 11:28:10 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:27.014 11:28:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:27.014 11:28:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.014 11:28:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:27.014 ************************************ 00:17:27.014 START TEST nvmf_bdevio 00:17:27.014 ************************************ 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:27.014 * Looking for test storage... 
00:17:27.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.014 11:28:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:27.015 11:28:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.015 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:27.015 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:27.015 11:28:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:27.015 11:28:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:32.291 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.291 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:32.292 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:32.292 Found net devices under 0000:86:00.0: cvl_0_0 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:32.292 
Found net devices under 0000:86:00.1: cvl_0_1 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.292 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.550 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.550 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.550 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:32.550 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.550 11:28:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:32.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:17:32.550 00:17:32.550 --- 10.0.0.2 ping statistics --- 00:17:32.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.550 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:32.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:17:32.550 00:17:32.550 --- 10.0.0.1 ping statistics --- 00:17:32.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.550 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=597109 00:17:32.550 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 597109 00:17:32.551 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:32.551 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 597109 ']' 00:17:32.551 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.551 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.551 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.551 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.551 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:32.551 [2024-07-15 11:28:16.117862] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:17:32.551 [2024-07-15 11:28:16.117903] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.808 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.808 [2024-07-15 11:28:16.185416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.808 [2024-07-15 11:28:16.264356] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.808 [2024-07-15 11:28:16.264392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
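The nvmf_tcp_init trace above splits the dual-port E810 card across two network stacks on one machine: one port (cvl_0_0) is moved into a fresh namespace and becomes the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, port 4420 is opened in iptables, and connectivity is checked with one ping in each direction before nvme-tcp is loaded and nvmf_tgt is started inside the namespace. A condensed, hand-written sketch of those steps follows; interface names and addresses are the ones used in this run, and $SPDK is only a stand-in for the checked-out repo root, not a variable the script itself uses.

# Reconstruction of the nvmf_tcp_init steps traced above (a sketch, not the full common.sh logic).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator
modprobe nvme-tcp
ip netns exec "$NS" $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &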
00:17:32.808 [2024-07-15 11:28:16.264399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.808 [2024-07-15 11:28:16.264404] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.808 [2024-07-15 11:28:16.264410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.808 [2024-07-15 11:28:16.264522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:32.808 [2024-07-15 11:28:16.264630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:32.808 [2024-07-15 11:28:16.264732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.808 [2024-07-15 11:28:16.264733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:33.372 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:33.372 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:33.372 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:33.372 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:33.372 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:33.372 11:28:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.372 11:28:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:33.372 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.372 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:33.372 [2024-07-15 11:28:16.962083] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.629 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.629 11:28:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:33.629 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.629 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:33.629 Malloc0 00:17:33.629 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.629 11:28:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:33.629 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.629 11:28:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:17:33.629 [2024-07-15 11:28:17.013796] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:33.629 { 00:17:33.629 "params": { 00:17:33.629 "name": "Nvme$subsystem", 00:17:33.629 "trtype": "$TEST_TRANSPORT", 00:17:33.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:33.629 "adrfam": "ipv4", 00:17:33.629 "trsvcid": "$NVMF_PORT", 00:17:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:33.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:33.629 "hdgst": ${hdgst:-false}, 00:17:33.629 "ddgst": ${ddgst:-false} 00:17:33.629 }, 00:17:33.629 "method": "bdev_nvme_attach_controller" 00:17:33.629 } 00:17:33.629 EOF 00:17:33.629 )") 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:33.629 11:28:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:33.629 "params": { 00:17:33.629 "name": "Nvme1", 00:17:33.629 "trtype": "tcp", 00:17:33.629 "traddr": "10.0.0.2", 00:17:33.629 "adrfam": "ipv4", 00:17:33.629 "trsvcid": "4420", 00:17:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:33.629 "hdgst": false, 00:17:33.629 "ddgst": false 00:17:33.629 }, 00:17:33.629 "method": "bdev_nvme_attach_controller" 00:17:33.629 }' 00:17:33.629 [2024-07-15 11:28:17.062554] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:17:33.629 [2024-07-15 11:28:17.062596] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597356 ] 00:17:33.629 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.629 [2024-07-15 11:28:17.129805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:33.629 [2024-07-15 11:28:17.204953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.629 [2024-07-15 11:28:17.205058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.629 [2024-07-15 11:28:17.205059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.886 I/O targets: 00:17:33.886 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:33.886 00:17:33.886 00:17:33.886 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.886 http://cunit.sourceforge.net/ 00:17:33.886 00:17:33.886 00:17:33.886 Suite: bdevio tests on: Nvme1n1 00:17:33.886 Test: blockdev write read block ...passed 00:17:34.142 Test: blockdev write zeroes read block ...passed 00:17:34.142 Test: blockdev write zeroes read no split ...passed 00:17:34.142 Test: blockdev write zeroes read split ...passed 00:17:34.142 Test: blockdev write zeroes read split partial ...passed 00:17:34.142 Test: blockdev reset ...[2024-07-15 11:28:17.606756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:34.142 [2024-07-15 11:28:17.606817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25256d0 (9): Bad file descriptor 00:17:34.399 [2024-07-15 11:28:17.741989] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
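The target that bdevio is exercising here was assembled a few lines earlier with five RPCs (target/bdevio.sh lines 18 through 22), issued through the rpc_cmd wrapper against the default /var/tmp/spdk.sock socket of the nvmf_tgt running in the namespace. Written out directly against scripts/rpc.py, the same sequence looks roughly like the sketch below ($SPDK again stands in for the repo root).

# Target-side setup behind the bdevio test, as traced above.
RPC="$SPDK/scripts/rpc.py"                         # defaults to /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420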
00:17:34.399 passed 00:17:34.399 Test: blockdev write read 8 blocks ...passed 00:17:34.399 Test: blockdev write read size > 128k ...passed 00:17:34.399 Test: blockdev write read invalid size ...passed 00:17:34.399 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:34.399 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:34.399 Test: blockdev write read max offset ...passed 00:17:34.399 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:34.399 Test: blockdev writev readv 8 blocks ...passed 00:17:34.399 Test: blockdev writev readv 30 x 1block ...passed 00:17:34.399 Test: blockdev writev readv block ...passed 00:17:34.399 Test: blockdev writev readv size > 128k ...passed 00:17:34.399 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:34.399 Test: blockdev comparev and writev ...[2024-07-15 11:28:17.952336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.399 [2024-07-15 11:28:17.952367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.399 [2024-07-15 11:28:17.952380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.399 [2024-07-15 11:28:17.952389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:34.399 [2024-07-15 11:28:17.952649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.399 [2024-07-15 11:28:17.952660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:34.399 [2024-07-15 11:28:17.952672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.399 [2024-07-15 11:28:17.952680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:34.399 [2024-07-15 11:28:17.952949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.399 [2024-07-15 11:28:17.952960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:34.399 [2024-07-15 11:28:17.952972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.399 [2024-07-15 11:28:17.952979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:34.399 [2024-07-15 11:28:17.953218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.399 [2024-07-15 11:28:17.953236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:34.399 [2024-07-15 11:28:17.953248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.399 [2024-07-15 11:28:17.953255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:34.657 passed 00:17:34.657 Test: blockdev nvme passthru rw ...passed 00:17:34.657 Test: blockdev nvme passthru vendor specific ...[2024-07-15 11:28:18.035642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:34.657 [2024-07-15 11:28:18.035667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:34.657 [2024-07-15 11:28:18.035788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:34.657 [2024-07-15 11:28:18.035799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:34.657 [2024-07-15 11:28:18.035912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:34.657 [2024-07-15 11:28:18.035923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:34.657 [2024-07-15 11:28:18.036042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:34.657 [2024-07-15 11:28:18.036052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:34.657 passed 00:17:34.657 Test: blockdev nvme admin passthru ...passed 00:17:34.657 Test: blockdev copy ...passed 00:17:34.657 00:17:34.657 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.657 suites 1 1 n/a 0 0 00:17:34.657 tests 23 23 23 0 0 00:17:34.657 asserts 152 152 152 0 n/a 00:17:34.657 00:17:34.657 Elapsed time = 1.380 seconds 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:34.941 rmmod nvme_tcp 00:17:34.941 rmmod nvme_fabrics 00:17:34.941 rmmod nvme_keyring 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 597109 ']' 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 597109 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
597109 ']' 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 597109 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 597109 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:34.941 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 597109' 00:17:34.941 killing process with pid 597109 00:17:34.942 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 597109 00:17:34.942 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 597109 00:17:35.201 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.201 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.201 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.201 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.201 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.201 11:28:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.201 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.201 11:28:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.103 11:28:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:37.103 00:17:37.103 real 0m10.469s 00:17:37.103 user 0m13.188s 00:17:37.103 sys 0m4.859s 00:17:37.103 11:28:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.103 11:28:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:37.103 ************************************ 00:17:37.103 END TEST nvmf_bdevio 00:17:37.103 ************************************ 00:17:37.103 11:28:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:37.103 11:28:20 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:37.103 11:28:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:37.103 11:28:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.103 11:28:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:37.363 ************************************ 00:17:37.363 START TEST nvmf_auth_target 00:17:37.363 ************************************ 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:37.363 * Looking for test storage... 
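One detail of the bdevio run that just wrapped up is worth spelling out before the auth-target prologue below: the initiator-side bdevio binary was driven through a JSON config on /dev/fd/62, generated by gen_nvmf_target_json, whose resolved bdev_nvme_attach_controller entry is printed verbatim in the trace above. A standalone approximation of that invocation is sketched here; the outer "subsystems"/"bdev" wrapper is an assumption based on SPDK's usual --json config layout, since the trace only prints the inner object.

$SPDK/test/bdev/bdevio/bdevio --json /dev/stdin <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    } ]
  } ]
}
EOF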
00:17:37.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:37.363 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:37.364 11:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.933 11:28:26 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:43.933 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:43.933 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:17:43.933 Found net devices under 0000:86:00.0: cvl_0_0 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:43.933 Found net devices under 0000:86:00.1: cvl_0_1 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:43.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:17:43.933 00:17:43.933 --- 10.0.0.2 ping statistics --- 00:17:43.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.933 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:17:43.933 00:17:43.933 --- 10.0.0.1 ping statistics --- 00:17:43.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.933 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:43.933 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:43.934 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.934 11:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:43.934 11:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.934 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=601056 00:17:43.934 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 601056 00:17:43.934 11:28:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:43.934 11:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 601056 ']' 00:17:43.934 11:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.934 11:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.934 11:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
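For the auth test the network setup above is repeated, but two SPDK processes are involved: the nvmf_tgt just launched inside the namespace with -L nvmf_auth (driven through rpc_cmd and the default /var/tmp/spdk.sock), and, a few lines further down in the trace, a second host-side spdk_tgt that owns its own RPC socket at /var/tmp/host.sock and is driven through the hostrpc helper. A rough sketch of that layout, assembled from the commands in this trace, with $SPDK standing in for the repo root:

# Target: NVMe-oF target in the namespace, DH-HMAC-CHAP tracing enabled.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &

# Host: a second SPDK app acting as the initiator side, on its own RPC socket.
$SPDK/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &

# rpc_cmd  ~  $SPDK/scripts/rpc.py                        (target, /var/tmp/spdk.sock)
# hostrpc  ~  $SPDK/scripts/rpc.py -s /var/tmp/host.sock  (host)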
00:17:43.934 11:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.934 11:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=601134 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=feac08509ae52a2ca2c9ba7661c40c7a290b419cd727575f 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.NIt 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key feac08509ae52a2ca2c9ba7661c40c7a290b419cd727575f 0 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 feac08509ae52a2ca2c9ba7661c40c7a290b419cd727575f 0 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=feac08509ae52a2ca2c9ba7661c40c7a290b419cd727575f 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:43.934 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.NIt 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.NIt 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.NIt 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fb8b348f21ce303be89befcc5a6c4c8f6de5cfce52a310546762a42dc407c51d 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.OrK 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fb8b348f21ce303be89befcc5a6c4c8f6de5cfce52a310546762a42dc407c51d 3 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fb8b348f21ce303be89befcc5a6c4c8f6de5cfce52a310546762a42dc407c51d 3 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fb8b348f21ce303be89befcc5a6c4c8f6de5cfce52a310546762a42dc407c51d 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.OrK 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.OrK 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.OrK 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:44.226 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=50d894555bf55981906ac4175d3c9bab 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8vK 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 50d894555bf55981906ac4175d3c9bab 1 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 50d894555bf55981906ac4175d3c9bab 1 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=50d894555bf55981906ac4175d3c9bab 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8vK 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8vK 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.8vK 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ba3a0f9f51b7d118eb3930af6345734beb14e760956cda5b 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IgK 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ba3a0f9f51b7d118eb3930af6345734beb14e760956cda5b 2 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ba3a0f9f51b7d118eb3930af6345734beb14e760956cda5b 2 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ba3a0f9f51b7d118eb3930af6345734beb14e760956cda5b 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IgK 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IgK 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.IgK 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1436e5867148516b200d6eef57cda57d19a91aeb76249826 00:17:44.227 
11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.sPN 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1436e5867148516b200d6eef57cda57d19a91aeb76249826 2 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1436e5867148516b200d6eef57cda57d19a91aeb76249826 2 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1436e5867148516b200d6eef57cda57d19a91aeb76249826 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.sPN 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.sPN 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.sPN 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c1e8617541c92920284108ce5b470919 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.g3A 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c1e8617541c92920284108ce5b470919 1 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c1e8617541c92920284108ce5b470919 1 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c1e8617541c92920284108ce5b470919 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:44.227 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:44.494 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.g3A 00:17:44.494 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.g3A 00:17:44.494 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.g3A 00:17:44.494 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:44.494 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:44.494 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0a3db9e2980934fcc2fbbd235b41a93a052547127c958a5952e7c2e7ed2ba9a6 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lBL 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0a3db9e2980934fcc2fbbd235b41a93a052547127c958a5952e7c2e7ed2ba9a6 3 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0a3db9e2980934fcc2fbbd235b41a93a052547127c958a5952e7c2e7ed2ba9a6 3 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0a3db9e2980934fcc2fbbd235b41a93a052547127c958a5952e7c2e7ed2ba9a6 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lBL 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lBL 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.lBL 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 601056 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 601056 ']' 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
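The eight gen_dhchap_key calls traced above all follow the same pattern: pull len/2 random bytes from /dev/urandom as a hex string, map the digest name to its index (null=0, sha256=1, sha384=2, sha512=3), format the result into a DHHC-1 secret through an inline python helper whose body is elided in this trace, and write it to a chmod-0600 temp file whose path becomes the keys[]/ckeys[] entry. The sketch below reproduces only the visible shell steps and leaves that formatting step as the opaque call it is in the log.

# Sketch of gen_dhchap_key <digest> <len>, e.g. "null 48" or "sha512 64".
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len hex characters of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # format_dhchap_key is the traced helper that pipes the hex key and digest
    # index through "python -" to produce the DHHC-1 secret string; its heredoc
    # body is not shown in this log, so it is only referenced here, not redefined.
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

# Usage mirroring the run above: a null key for keys[0], its sha512 controller key for ckeys[0].
keys[0]=$(gen_dhchap_key null 48); ckeys[0]=$(gen_dhchap_key sha512 64)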
00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.495 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.495 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.495 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:44.495 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 601134 /var/tmp/host.sock 00:17:44.495 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 601134 ']' 00:17:44.495 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:44.495 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.495 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:44.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:44.495 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.495 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NIt 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.NIt 00:17:44.753 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.NIt 00:17:45.011 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.OrK ]] 00:17:45.011 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OrK 00:17:45.011 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.011 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.011 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.011 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OrK 00:17:45.011 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
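[Editor's note] Both daemons are gated on waitforlisten here: the nvmf target (pid 601056) on the default /var/tmp/spdk.sock and the host-side application (pid 601134) on /var/tmp/host.sock. A rough stand-in for that wait, assuming an SPDK checkout with scripts/rpc.py and the standard rpc_get_methods RPC:

# Poll an SPDK RPC socket until the application answers, or give up.
wait_for_rpc() {
    local sock=$1 retries=${2:-100}
    for ((i = 0; i < retries; i++)); do
        if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            return 0                      # application is up and listening
        fi
        sleep 0.5
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

wait_for_rpc /var/tmp/spdk.sock      # nvmf target
wait_for_rpc /var/tmp/host.sock      # host-side bdev application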
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OrK 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8vK 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.8vK 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.8vK 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.IgK ]] 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IgK 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IgK 00:17:45.269 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IgK 00:17:45.527 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:45.527 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.sPN 00:17:45.527 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.527 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.527 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.527 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.sPN 00:17:45.527 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.sPN 00:17:45.785 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.g3A ]] 00:17:45.785 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.g3A 00:17:45.785 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.785 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.785 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.785 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.g3A 00:17:45.785 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
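[Editor's note] Each secret file is registered twice, once on the target's default RPC socket (rpc_cmd) and once on the host application's /var/tmp/host.sock (hostrpc), under the names key0..key3 and ckey0..ckey2 that the later DH-CHAP options refer to. Condensed, and assuming keys[]/ckeys[] hold the file paths generated above, the loop amounts to:

# rpc.py stands for SPDK's scripts/rpc.py; the host side differs only in the -s socket.
for i in "${!keys[@]}"; do
    rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
    rpc.py -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"
    if [[ -n ${ckeys[$i]:-} ]]; then      # key3 has no controller key in this run
        rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        rpc.py -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done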
/tmp/spdk.key-sha256.g3A 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lBL 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.lBL 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.lBL 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:46.043 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:46.300 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:46.300 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.300 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.300 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:46.300 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.300 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.300 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.300 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.300 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.300 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.300 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.300 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.558 00:17:46.558 11:28:30 nvmf_tcp.nvmf_auth_target -- 
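[Editor's note] This is the first pass of connect_authenticate (sha256 digest, null DH group, key0): the host driver is pinned to one digest/DH-group combination, the host NQN is allowed on the subsystem with key0/ckey0, and a controller is attached with the same pair. The same sequence in isolation, with addresses, NQNs and flags copied from the log (rpc.py = scripts/rpc.py on the target socket):

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

# Host side: restrict DH-CHAP negotiation to the combination under test.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null

# Target side: allow this host and bind key0 (host key) and ckey0 (controller key).
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller that must authenticate with the same pair.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0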
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.558 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.558 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.817 { 00:17:46.817 "cntlid": 1, 00:17:46.817 "qid": 0, 00:17:46.817 "state": "enabled", 00:17:46.817 "thread": "nvmf_tgt_poll_group_000", 00:17:46.817 "listen_address": { 00:17:46.817 "trtype": "TCP", 00:17:46.817 "adrfam": "IPv4", 00:17:46.817 "traddr": "10.0.0.2", 00:17:46.817 "trsvcid": "4420" 00:17:46.817 }, 00:17:46.817 "peer_address": { 00:17:46.817 "trtype": "TCP", 00:17:46.817 "adrfam": "IPv4", 00:17:46.817 "traddr": "10.0.0.1", 00:17:46.817 "trsvcid": "56470" 00:17:46.817 }, 00:17:46.817 "auth": { 00:17:46.817 "state": "completed", 00:17:46.817 "digest": "sha256", 00:17:46.817 "dhgroup": "null" 00:17:46.817 } 00:17:46.817 } 00:17:46.817 ]' 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.817 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.075 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:17:47.641 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.641 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.641 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.641 11:28:31 
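[Editor's note] After the attach, the test confirms that authentication really happened with the expected parameters: the controller name is read back from the host, and the subsystem's qpairs are checked with jq for digest, DH group and a completed auth state. The same assertions on their own, with field paths exactly as in the JSON above and $SUBNQN as in the previous sketch:

qpairs=$(rpc.py nvmf_subsystem_get_qpairs "$SUBNQN")

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Controller name sanity check, mirroring the "[[ nvme0 == nvme0 ]]" lines.
[[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]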
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.641 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.641 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.641 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:47.641 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:47.899 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:47.899 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.899 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.899 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:47.899 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:47.899 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.899 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.899 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.899 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.899 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.899 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.899 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.157 00:17:48.157 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.157 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.157 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.157 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.157 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.157 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.157 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.157 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.157 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.157 { 00:17:48.157 "cntlid": 3, 00:17:48.157 "qid": 0, 00:17:48.157 
"state": "enabled", 00:17:48.157 "thread": "nvmf_tgt_poll_group_000", 00:17:48.157 "listen_address": { 00:17:48.157 "trtype": "TCP", 00:17:48.157 "adrfam": "IPv4", 00:17:48.157 "traddr": "10.0.0.2", 00:17:48.157 "trsvcid": "4420" 00:17:48.157 }, 00:17:48.157 "peer_address": { 00:17:48.157 "trtype": "TCP", 00:17:48.157 "adrfam": "IPv4", 00:17:48.157 "traddr": "10.0.0.1", 00:17:48.157 "trsvcid": "56480" 00:17:48.157 }, 00:17:48.157 "auth": { 00:17:48.157 "state": "completed", 00:17:48.157 "digest": "sha256", 00:17:48.157 "dhgroup": "null" 00:17:48.157 } 00:17:48.157 } 00:17:48.157 ]' 00:17:48.157 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.415 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.415 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.415 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:48.415 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.415 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.415 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.415 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.673 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:49.238 11:28:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.238 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.496 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.496 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.496 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.496 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.496 00:17:49.496 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.496 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.496 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.752 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.752 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.752 11:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.752 11:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.752 11:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.752 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.752 { 00:17:49.752 "cntlid": 5, 00:17:49.752 "qid": 0, 00:17:49.752 "state": "enabled", 00:17:49.752 "thread": "nvmf_tgt_poll_group_000", 00:17:49.752 "listen_address": { 00:17:49.752 "trtype": "TCP", 00:17:49.752 "adrfam": "IPv4", 00:17:49.752 "traddr": "10.0.0.2", 00:17:49.752 "trsvcid": "4420" 00:17:49.752 }, 00:17:49.752 "peer_address": { 00:17:49.752 "trtype": "TCP", 00:17:49.752 "adrfam": "IPv4", 00:17:49.752 "traddr": "10.0.0.1", 00:17:49.752 "trsvcid": "56508" 00:17:49.752 }, 00:17:49.752 "auth": { 00:17:49.752 "state": "completed", 00:17:49.752 "digest": "sha256", 00:17:49.752 "dhgroup": "null" 00:17:49.752 } 00:17:49.752 } 00:17:49.752 ]' 00:17:49.752 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.752 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.752 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.009 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:50.009 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:50.009 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.009 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.009 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.009 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:17:50.574 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.831 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.831 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.832 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.089 00:17:51.089 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.089 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.089 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.346 { 00:17:51.346 "cntlid": 7, 00:17:51.346 "qid": 0, 00:17:51.346 "state": "enabled", 00:17:51.346 "thread": "nvmf_tgt_poll_group_000", 00:17:51.346 "listen_address": { 00:17:51.346 "trtype": "TCP", 00:17:51.346 "adrfam": "IPv4", 00:17:51.346 "traddr": "10.0.0.2", 00:17:51.346 "trsvcid": "4420" 00:17:51.346 }, 00:17:51.346 "peer_address": { 00:17:51.346 "trtype": "TCP", 00:17:51.346 "adrfam": "IPv4", 00:17:51.346 "traddr": "10.0.0.1", 00:17:51.346 "trsvcid": "56532" 00:17:51.346 }, 00:17:51.346 "auth": { 00:17:51.346 "state": "completed", 00:17:51.346 "digest": "sha256", 00:17:51.346 "dhgroup": "null" 00:17:51.346 } 00:17:51.346 } 00:17:51.346 ]' 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.346 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.603 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:17:52.167 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.167 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
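[Editor's note] key3 was generated without a companion controller key (ckeys[3] is empty), so the round above is unidirectional DH-CHAP: the target authenticates the host, but the host never challenges the controller. On the RPC side that simply means the controller-key options are omitted, and on the nvme-cli side only --dhchap-secret is passed:

# One-way authentication: no --dhchap-ctrlr-key / --dhchap-ctrl-secret anywhere.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-key key3

rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3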
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.167 11:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.167 11:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.167 11:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.167 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.167 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.167 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:52.167 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:52.424 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:52.424 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.424 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.424 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:52.424 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:52.424 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.424 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.424 11:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.424 11:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.424 11:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.424 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.424 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.681 00:17:52.681 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.681 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.681 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
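[Editor's note] From here the transcript repeats the pattern with the ffdhe2048 DH group, and later ffdhe3072, over the same four keys. Stripped of the per-step output, the sweep that target/auth.sh drives is three nested loops around the steps sketched above; only sha256 has appeared in this part of the transcript, so the digest list below is deliberately conservative:

digests=(sha256)                            # only the digest seen so far in this log
dhgroups=(null ffdhe2048 ffdhe3072)

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # add_host/attach/verify/teardown as above
        done
    done
done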
-- # xtrace_disable 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.939 { 00:17:52.939 "cntlid": 9, 00:17:52.939 "qid": 0, 00:17:52.939 "state": "enabled", 00:17:52.939 "thread": "nvmf_tgt_poll_group_000", 00:17:52.939 "listen_address": { 00:17:52.939 "trtype": "TCP", 00:17:52.939 "adrfam": "IPv4", 00:17:52.939 "traddr": "10.0.0.2", 00:17:52.939 "trsvcid": "4420" 00:17:52.939 }, 00:17:52.939 "peer_address": { 00:17:52.939 "trtype": "TCP", 00:17:52.939 "adrfam": "IPv4", 00:17:52.939 "traddr": "10.0.0.1", 00:17:52.939 "trsvcid": "56568" 00:17:52.939 }, 00:17:52.939 "auth": { 00:17:52.939 "state": "completed", 00:17:52.939 "digest": "sha256", 00:17:52.939 "dhgroup": "ffdhe2048" 00:17:52.939 } 00:17:52.939 } 00:17:52.939 ]' 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.939 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.196 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:17:53.789 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.789 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.789 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.789 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.789 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.789 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.789 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:53.789 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.048 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.048 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.306 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.306 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.306 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.306 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.306 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.306 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.306 { 00:17:54.306 "cntlid": 11, 00:17:54.306 "qid": 0, 00:17:54.306 "state": "enabled", 00:17:54.306 "thread": "nvmf_tgt_poll_group_000", 00:17:54.306 "listen_address": { 00:17:54.306 "trtype": "TCP", 00:17:54.306 "adrfam": "IPv4", 00:17:54.306 "traddr": "10.0.0.2", 00:17:54.306 "trsvcid": "4420" 00:17:54.306 }, 00:17:54.306 "peer_address": { 00:17:54.306 "trtype": "TCP", 00:17:54.306 "adrfam": "IPv4", 00:17:54.306 "traddr": "10.0.0.1", 00:17:54.306 "trsvcid": "43308" 00:17:54.306 }, 00:17:54.306 "auth": { 00:17:54.306 "state": "completed", 00:17:54.306 "digest": "sha256", 00:17:54.306 "dhgroup": "ffdhe2048" 00:17:54.306 } 00:17:54.306 } 00:17:54.306 ]' 00:17:54.306 
11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.306 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.306 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.564 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.564 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.564 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.564 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.564 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.564 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:17:55.129 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.129 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.386 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.644 00:17:55.644 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.644 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.644 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.902 { 00:17:55.902 "cntlid": 13, 00:17:55.902 "qid": 0, 00:17:55.902 "state": "enabled", 00:17:55.902 "thread": "nvmf_tgt_poll_group_000", 00:17:55.902 "listen_address": { 00:17:55.902 "trtype": "TCP", 00:17:55.902 "adrfam": "IPv4", 00:17:55.902 "traddr": "10.0.0.2", 00:17:55.902 "trsvcid": "4420" 00:17:55.902 }, 00:17:55.902 "peer_address": { 00:17:55.902 "trtype": "TCP", 00:17:55.902 "adrfam": "IPv4", 00:17:55.902 "traddr": "10.0.0.1", 00:17:55.902 "trsvcid": "43332" 00:17:55.902 }, 00:17:55.902 "auth": { 00:17:55.902 "state": "completed", 00:17:55.902 "digest": "sha256", 00:17:55.902 "dhgroup": "ffdhe2048" 00:17:55.902 } 00:17:55.902 } 00:17:55.902 ]' 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.902 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.159 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:17:56.726 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.726 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.726 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.726 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.726 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.726 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.726 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.726 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.984 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:56.984 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.984 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.984 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:56.984 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.984 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.984 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:56.984 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.984 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.984 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.984 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.984 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.242 00:17:57.242 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.242 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.242 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.242 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.242 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.242 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.242 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.501 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.501 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.501 { 00:17:57.501 "cntlid": 15, 00:17:57.501 "qid": 0, 00:17:57.501 "state": "enabled", 00:17:57.501 "thread": "nvmf_tgt_poll_group_000", 00:17:57.501 "listen_address": { 00:17:57.501 "trtype": "TCP", 00:17:57.501 "adrfam": "IPv4", 00:17:57.501 "traddr": "10.0.0.2", 00:17:57.501 "trsvcid": "4420" 00:17:57.501 }, 00:17:57.501 "peer_address": { 00:17:57.501 "trtype": "TCP", 00:17:57.501 "adrfam": "IPv4", 00:17:57.501 "traddr": "10.0.0.1", 00:17:57.501 "trsvcid": "43348" 00:17:57.501 }, 00:17:57.501 "auth": { 00:17:57.501 "state": "completed", 00:17:57.501 "digest": "sha256", 00:17:57.501 "dhgroup": "ffdhe2048" 00:17:57.501 } 00:17:57.501 } 00:17:57.501 ]' 00:17:57.501 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.501 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.501 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.501 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.501 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.501 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.501 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.501 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.759 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.395 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.654 00:17:58.654 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.654 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.654 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.912 { 00:17:58.912 "cntlid": 17, 00:17:58.912 "qid": 0, 00:17:58.912 "state": "enabled", 00:17:58.912 "thread": "nvmf_tgt_poll_group_000", 00:17:58.912 "listen_address": { 00:17:58.912 "trtype": "TCP", 00:17:58.912 "adrfam": "IPv4", 00:17:58.912 "traddr": 
"10.0.0.2", 00:17:58.912 "trsvcid": "4420" 00:17:58.912 }, 00:17:58.912 "peer_address": { 00:17:58.912 "trtype": "TCP", 00:17:58.912 "adrfam": "IPv4", 00:17:58.912 "traddr": "10.0.0.1", 00:17:58.912 "trsvcid": "43376" 00:17:58.912 }, 00:17:58.912 "auth": { 00:17:58.912 "state": "completed", 00:17:58.912 "digest": "sha256", 00:17:58.912 "dhgroup": "ffdhe3072" 00:17:58.912 } 00:17:58.912 } 00:17:58.912 ]' 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.912 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.171 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:17:59.738 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.738 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:59.738 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.738 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.739 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.739 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.739 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:59.739 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:59.996 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:59.996 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.996 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.996 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:59.996 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:59.996 11:28:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.996 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.996 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.996 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.996 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.996 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.996 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.254 00:18:00.254 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.254 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.254 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.254 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.254 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.254 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.254 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.512 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.512 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.512 { 00:18:00.512 "cntlid": 19, 00:18:00.512 "qid": 0, 00:18:00.512 "state": "enabled", 00:18:00.512 "thread": "nvmf_tgt_poll_group_000", 00:18:00.512 "listen_address": { 00:18:00.512 "trtype": "TCP", 00:18:00.512 "adrfam": "IPv4", 00:18:00.512 "traddr": "10.0.0.2", 00:18:00.512 "trsvcid": "4420" 00:18:00.512 }, 00:18:00.512 "peer_address": { 00:18:00.512 "trtype": "TCP", 00:18:00.512 "adrfam": "IPv4", 00:18:00.512 "traddr": "10.0.0.1", 00:18:00.512 "trsvcid": "43396" 00:18:00.512 }, 00:18:00.512 "auth": { 00:18:00.512 "state": "completed", 00:18:00.512 "digest": "sha256", 00:18:00.512 "dhgroup": "ffdhe3072" 00:18:00.512 } 00:18:00.512 } 00:18:00.512 ]' 00:18:00.512 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.512 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.512 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.512 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:00.512 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.512 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.512 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.512 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.771 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.339 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.598 00:18:01.598 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.598 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.598 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.856 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.856 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.856 11:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.856 11:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.856 11:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.856 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.856 { 00:18:01.856 "cntlid": 21, 00:18:01.856 "qid": 0, 00:18:01.856 "state": "enabled", 00:18:01.856 "thread": "nvmf_tgt_poll_group_000", 00:18:01.856 "listen_address": { 00:18:01.856 "trtype": "TCP", 00:18:01.856 "adrfam": "IPv4", 00:18:01.856 "traddr": "10.0.0.2", 00:18:01.856 "trsvcid": "4420" 00:18:01.856 }, 00:18:01.856 "peer_address": { 00:18:01.856 "trtype": "TCP", 00:18:01.856 "adrfam": "IPv4", 00:18:01.856 "traddr": "10.0.0.1", 00:18:01.856 "trsvcid": "43430" 00:18:01.856 }, 00:18:01.856 "auth": { 00:18:01.856 "state": "completed", 00:18:01.856 "digest": "sha256", 00:18:01.856 "dhgroup": "ffdhe3072" 00:18:01.856 } 00:18:01.856 } 00:18:01.856 ]' 00:18:01.856 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.856 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.856 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.114 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.114 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.114 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.114 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.114 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.114 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:18:02.681 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
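The trace above repeats the same connect_authenticate pass for every digest, DH group and key index. A condensed bash sketch of one such pass follows, reusing only commands visible in the trace (the sha256/ffdhe3072/key1 combination, the initiator RPC socket /var/tmp/host.sock, and the target RPC reached through the test's rpc_cmd wrapper); it is a simplified restatement, not additional captured output.

    # condensed sketch of one connect_authenticate pass, assuming the paths
    # and key names shown in the trace above
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    # host side: restrict the initiator to the digest/dhgroup under test
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # target side (default RPC socket, as rpc_cmd uses in the trace):
    # allow the host NQN with the DH-HMAC-CHAP key pair under test
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attaching a controller triggers the authentication exchange
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # verify the qpair negotiated the expected digest/dhgroup and completed auth
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'

    # tear down before the next key/dhgroup combination
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0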
00:18:02.681 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.681 11:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.681 11:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.681 11:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.681 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.681 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:02.681 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:02.939 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:02.939 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.939 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.939 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:02.939 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:02.939 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.939 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:02.939 11:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.939 11:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.939 11:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.939 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.939 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.198 00:18:03.198 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.198 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.198 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.457 { 00:18:03.457 "cntlid": 23, 00:18:03.457 "qid": 0, 00:18:03.457 "state": "enabled", 00:18:03.457 "thread": "nvmf_tgt_poll_group_000", 00:18:03.457 "listen_address": { 00:18:03.457 "trtype": "TCP", 00:18:03.457 "adrfam": "IPv4", 00:18:03.457 "traddr": "10.0.0.2", 00:18:03.457 "trsvcid": "4420" 00:18:03.457 }, 00:18:03.457 "peer_address": { 00:18:03.457 "trtype": "TCP", 00:18:03.457 "adrfam": "IPv4", 00:18:03.457 "traddr": "10.0.0.1", 00:18:03.457 "trsvcid": "43452" 00:18:03.457 }, 00:18:03.457 "auth": { 00:18:03.457 "state": "completed", 00:18:03.457 "digest": "sha256", 00:18:03.457 "dhgroup": "ffdhe3072" 00:18:03.457 } 00:18:03.457 } 00:18:03.457 ]' 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.457 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.715 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:18:04.281 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.281 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.281 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.281 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.281 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.281 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.281 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.281 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:04.281 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:04.539 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:04.539 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.539 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.539 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:04.539 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:04.539 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.539 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.539 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.539 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.539 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.539 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.539 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.798 00:18:04.798 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.798 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.798 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.798 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.798 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.798 11:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.798 11:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.056 11:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.056 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.056 { 00:18:05.056 "cntlid": 25, 00:18:05.056 "qid": 0, 00:18:05.056 "state": "enabled", 00:18:05.056 "thread": "nvmf_tgt_poll_group_000", 00:18:05.056 "listen_address": { 00:18:05.056 "trtype": "TCP", 00:18:05.056 "adrfam": "IPv4", 00:18:05.056 "traddr": "10.0.0.2", 00:18:05.056 "trsvcid": "4420" 00:18:05.056 }, 00:18:05.056 "peer_address": { 00:18:05.056 "trtype": "TCP", 00:18:05.056 "adrfam": "IPv4", 00:18:05.056 "traddr": "10.0.0.1", 00:18:05.056 "trsvcid": "38314" 00:18:05.056 }, 00:18:05.056 "auth": { 00:18:05.056 "state": "completed", 00:18:05.056 "digest": "sha256", 00:18:05.056 "dhgroup": "ffdhe4096" 00:18:05.056 } 00:18:05.056 } 00:18:05.056 ]' 00:18:05.056 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.056 11:28:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.056 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.056 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.056 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.056 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.056 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.056 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.314 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.881 11:28:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.881 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.140 00:18:06.399 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.399 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.399 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.399 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.399 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.399 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.399 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.399 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.399 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.399 { 00:18:06.399 "cntlid": 27, 00:18:06.399 "qid": 0, 00:18:06.399 "state": "enabled", 00:18:06.399 "thread": "nvmf_tgt_poll_group_000", 00:18:06.399 "listen_address": { 00:18:06.399 "trtype": "TCP", 00:18:06.399 "adrfam": "IPv4", 00:18:06.399 "traddr": "10.0.0.2", 00:18:06.399 "trsvcid": "4420" 00:18:06.399 }, 00:18:06.399 "peer_address": { 00:18:06.399 "trtype": "TCP", 00:18:06.399 "adrfam": "IPv4", 00:18:06.399 "traddr": "10.0.0.1", 00:18:06.399 "trsvcid": "38356" 00:18:06.399 }, 00:18:06.399 "auth": { 00:18:06.399 "state": "completed", 00:18:06.399 "digest": "sha256", 00:18:06.399 "dhgroup": "ffdhe4096" 00:18:06.399 } 00:18:06.399 } 00:18:06.399 ]' 00:18:06.399 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.399 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.399 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.657 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:06.657 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.657 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.657 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.657 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.916 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:18:07.482 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.482 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.482 11:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.482 11:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.482 11:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.482 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.482 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:07.482 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:07.482 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:07.482 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.482 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.482 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:07.482 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:07.482 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.482 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.482 11:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.482 11:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.482 11:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.482 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.482 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.741 00:18:07.741 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.741 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.741 11:28:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.000 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.000 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.000 11:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.000 11:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.000 11:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.000 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.000 { 00:18:08.000 "cntlid": 29, 00:18:08.000 "qid": 0, 00:18:08.000 "state": "enabled", 00:18:08.000 "thread": "nvmf_tgt_poll_group_000", 00:18:08.000 "listen_address": { 00:18:08.000 "trtype": "TCP", 00:18:08.000 "adrfam": "IPv4", 00:18:08.000 "traddr": "10.0.0.2", 00:18:08.000 "trsvcid": "4420" 00:18:08.000 }, 00:18:08.000 "peer_address": { 00:18:08.000 "trtype": "TCP", 00:18:08.000 "adrfam": "IPv4", 00:18:08.000 "traddr": "10.0.0.1", 00:18:08.000 "trsvcid": "38380" 00:18:08.000 }, 00:18:08.000 "auth": { 00:18:08.000 "state": "completed", 00:18:08.000 "digest": "sha256", 00:18:08.000 "dhgroup": "ffdhe4096" 00:18:08.000 } 00:18:08.000 } 00:18:08.000 ]' 00:18:08.000 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.000 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.000 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.000 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.000 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.259 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.259 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.259 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.259 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:18:08.826 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.826 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.826 11:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.826 11:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.826 11:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.826 11:28:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.827 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:08.827 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:09.084 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:09.085 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.085 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.085 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:09.085 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:09.085 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.085 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:09.085 11:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.085 11:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.085 11:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.085 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.085 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.343 00:18:09.343 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.343 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.343 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.601 { 00:18:09.601 "cntlid": 31, 00:18:09.601 "qid": 0, 00:18:09.601 "state": "enabled", 00:18:09.601 "thread": "nvmf_tgt_poll_group_000", 00:18:09.601 "listen_address": { 00:18:09.601 "trtype": "TCP", 00:18:09.601 "adrfam": "IPv4", 00:18:09.601 "traddr": "10.0.0.2", 00:18:09.601 "trsvcid": "4420" 00:18:09.601 }, 
00:18:09.601 "peer_address": { 00:18:09.601 "trtype": "TCP", 00:18:09.601 "adrfam": "IPv4", 00:18:09.601 "traddr": "10.0.0.1", 00:18:09.601 "trsvcid": "38414" 00:18:09.601 }, 00:18:09.601 "auth": { 00:18:09.601 "state": "completed", 00:18:09.601 "digest": "sha256", 00:18:09.601 "dhgroup": "ffdhe4096" 00:18:09.601 } 00:18:09.601 } 00:18:09.601 ]' 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.601 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.858 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:18:10.423 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.423 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.423 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.423 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.423 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.423 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.423 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.423 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:10.423 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:10.681 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:10.681 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.681 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.681 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:10.681 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:10.681 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:10.681 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.681 11:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.681 11:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.681 11:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.681 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.681 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.940 00:18:10.940 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.940 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.940 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.198 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.198 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.198 11:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.198 11:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.198 11:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.198 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.198 { 00:18:11.198 "cntlid": 33, 00:18:11.198 "qid": 0, 00:18:11.198 "state": "enabled", 00:18:11.198 "thread": "nvmf_tgt_poll_group_000", 00:18:11.198 "listen_address": { 00:18:11.198 "trtype": "TCP", 00:18:11.198 "adrfam": "IPv4", 00:18:11.198 "traddr": "10.0.0.2", 00:18:11.198 "trsvcid": "4420" 00:18:11.198 }, 00:18:11.198 "peer_address": { 00:18:11.198 "trtype": "TCP", 00:18:11.198 "adrfam": "IPv4", 00:18:11.198 "traddr": "10.0.0.1", 00:18:11.198 "trsvcid": "38432" 00:18:11.198 }, 00:18:11.198 "auth": { 00:18:11.198 "state": "completed", 00:18:11.198 "digest": "sha256", 00:18:11.198 "dhgroup": "ffdhe6144" 00:18:11.198 } 00:18:11.198 } 00:18:11.198 ]' 00:18:11.198 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.198 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.198 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.198 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.198 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.456 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.456 11:28:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.456 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.456 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:18:12.059 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.059 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.059 11:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.059 11:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.059 11:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.059 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.060 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:12.060 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:12.317 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:12.318 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.318 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.318 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:12.318 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:12.318 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.318 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.318 11:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.318 11:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.318 11:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.318 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.318 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.576 00:18:12.576 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.576 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.576 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.835 { 00:18:12.835 "cntlid": 35, 00:18:12.835 "qid": 0, 00:18:12.835 "state": "enabled", 00:18:12.835 "thread": "nvmf_tgt_poll_group_000", 00:18:12.835 "listen_address": { 00:18:12.835 "trtype": "TCP", 00:18:12.835 "adrfam": "IPv4", 00:18:12.835 "traddr": "10.0.0.2", 00:18:12.835 "trsvcid": "4420" 00:18:12.835 }, 00:18:12.835 "peer_address": { 00:18:12.835 "trtype": "TCP", 00:18:12.835 "adrfam": "IPv4", 00:18:12.835 "traddr": "10.0.0.1", 00:18:12.835 "trsvcid": "38458" 00:18:12.835 }, 00:18:12.835 "auth": { 00:18:12.835 "state": "completed", 00:18:12.835 "digest": "sha256", 00:18:12.835 "dhgroup": "ffdhe6144" 00:18:12.835 } 00:18:12.835 } 00:18:12.835 ]' 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.835 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.093 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:18:13.660 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.660 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.660 11:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.660 11:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.660 11:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.660 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.660 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:13.660 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:13.919 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:13.919 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.919 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.919 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:13.919 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:13.919 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.919 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.919 11:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.919 11:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.919 11:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.919 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.919 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.177 00:18:14.177 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.177 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.177 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.435 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.435 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.435 11:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.435 11:28:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:14.435 11:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.435 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.435 { 00:18:14.435 "cntlid": 37, 00:18:14.435 "qid": 0, 00:18:14.435 "state": "enabled", 00:18:14.435 "thread": "nvmf_tgt_poll_group_000", 00:18:14.435 "listen_address": { 00:18:14.435 "trtype": "TCP", 00:18:14.435 "adrfam": "IPv4", 00:18:14.435 "traddr": "10.0.0.2", 00:18:14.435 "trsvcid": "4420" 00:18:14.435 }, 00:18:14.435 "peer_address": { 00:18:14.435 "trtype": "TCP", 00:18:14.435 "adrfam": "IPv4", 00:18:14.435 "traddr": "10.0.0.1", 00:18:14.435 "trsvcid": "36014" 00:18:14.435 }, 00:18:14.435 "auth": { 00:18:14.435 "state": "completed", 00:18:14.435 "digest": "sha256", 00:18:14.435 "dhgroup": "ffdhe6144" 00:18:14.435 } 00:18:14.435 } 00:18:14.435 ]' 00:18:14.435 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.435 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.435 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.435 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.435 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.693 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.693 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.693 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.693 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:18:15.260 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.260 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.260 11:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.260 11:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.260 11:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.260 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.260 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:15.260 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:15.519 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:15.519 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.519 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.519 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:15.519 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:15.519 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.519 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:15.519 11:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.519 11:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.519 11:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.519 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.519 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.777 00:18:15.777 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.777 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.777 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.036 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.036 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.036 11:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.036 11:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.036 11:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.036 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.036 { 00:18:16.036 "cntlid": 39, 00:18:16.036 "qid": 0, 00:18:16.036 "state": "enabled", 00:18:16.036 "thread": "nvmf_tgt_poll_group_000", 00:18:16.036 "listen_address": { 00:18:16.036 "trtype": "TCP", 00:18:16.036 "adrfam": "IPv4", 00:18:16.036 "traddr": "10.0.0.2", 00:18:16.036 "trsvcid": "4420" 00:18:16.036 }, 00:18:16.036 "peer_address": { 00:18:16.036 "trtype": "TCP", 00:18:16.036 "adrfam": "IPv4", 00:18:16.036 "traddr": "10.0.0.1", 00:18:16.036 "trsvcid": "36044" 00:18:16.036 }, 00:18:16.036 "auth": { 00:18:16.036 "state": "completed", 00:18:16.036 "digest": "sha256", 00:18:16.036 "dhgroup": "ffdhe6144" 00:18:16.036 } 00:18:16.036 } 00:18:16.036 ]' 00:18:16.036 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.036 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.036 11:28:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.036 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.036 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.296 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.296 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.296 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.296 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:18:16.863 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.863 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.863 11:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.863 11:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.863 11:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.863 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.863 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.863 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:16.863 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.122 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:17.122 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.122 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.122 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:17.122 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:17.122 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.122 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.122 11:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.122 11:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.122 11:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.122 11:29:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.122 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.687 00:18:17.687 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.687 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.687 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.945 { 00:18:17.945 "cntlid": 41, 00:18:17.945 "qid": 0, 00:18:17.945 "state": "enabled", 00:18:17.945 "thread": "nvmf_tgt_poll_group_000", 00:18:17.945 "listen_address": { 00:18:17.945 "trtype": "TCP", 00:18:17.945 "adrfam": "IPv4", 00:18:17.945 "traddr": "10.0.0.2", 00:18:17.945 "trsvcid": "4420" 00:18:17.945 }, 00:18:17.945 "peer_address": { 00:18:17.945 "trtype": "TCP", 00:18:17.945 "adrfam": "IPv4", 00:18:17.945 "traddr": "10.0.0.1", 00:18:17.945 "trsvcid": "36074" 00:18:17.945 }, 00:18:17.945 "auth": { 00:18:17.945 "state": "completed", 00:18:17.945 "digest": "sha256", 00:18:17.945 "dhgroup": "ffdhe8192" 00:18:17.945 } 00:18:17.945 } 00:18:17.945 ]' 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.945 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.203 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:18:18.767 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.767 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.767 11:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.767 11:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.767 11:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.767 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.767 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:18.767 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:19.025 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:19.025 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.025 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.025 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:19.025 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:19.025 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.025 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.025 11:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.025 11:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.025 11:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.025 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.025 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.283 00:18:19.555 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.555 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.555 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.555 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.555 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.555 11:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.555 11:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.555 11:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.555 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.555 { 00:18:19.555 "cntlid": 43, 00:18:19.555 "qid": 0, 00:18:19.555 "state": "enabled", 00:18:19.555 "thread": "nvmf_tgt_poll_group_000", 00:18:19.555 "listen_address": { 00:18:19.555 "trtype": "TCP", 00:18:19.555 "adrfam": "IPv4", 00:18:19.555 "traddr": "10.0.0.2", 00:18:19.555 "trsvcid": "4420" 00:18:19.555 }, 00:18:19.555 "peer_address": { 00:18:19.555 "trtype": "TCP", 00:18:19.555 "adrfam": "IPv4", 00:18:19.555 "traddr": "10.0.0.1", 00:18:19.555 "trsvcid": "36108" 00:18:19.555 }, 00:18:19.555 "auth": { 00:18:19.555 "state": "completed", 00:18:19.555 "digest": "sha256", 00:18:19.555 "dhgroup": "ffdhe8192" 00:18:19.555 } 00:18:19.555 } 00:18:19.555 ]' 00:18:19.555 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.555 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.555 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.821 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.823 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.823 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.823 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.823 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.823 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:18:20.388 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.388 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.388 11:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.388 11:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.388 11:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.388 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:20.388 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:20.388 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:20.647 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:20.647 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.647 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.647 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:20.647 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:20.647 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.647 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.647 11:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.647 11:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.647 11:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.647 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.647 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.214 00:18:21.214 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.214 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.214 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.473 { 00:18:21.473 "cntlid": 45, 00:18:21.473 "qid": 0, 00:18:21.473 "state": "enabled", 00:18:21.473 "thread": "nvmf_tgt_poll_group_000", 00:18:21.473 "listen_address": { 00:18:21.473 "trtype": "TCP", 00:18:21.473 "adrfam": "IPv4", 00:18:21.473 "traddr": "10.0.0.2", 00:18:21.473 "trsvcid": "4420" 
00:18:21.473 }, 00:18:21.473 "peer_address": { 00:18:21.473 "trtype": "TCP", 00:18:21.473 "adrfam": "IPv4", 00:18:21.473 "traddr": "10.0.0.1", 00:18:21.473 "trsvcid": "36126" 00:18:21.473 }, 00:18:21.473 "auth": { 00:18:21.473 "state": "completed", 00:18:21.473 "digest": "sha256", 00:18:21.473 "dhgroup": "ffdhe8192" 00:18:21.473 } 00:18:21.473 } 00:18:21.473 ]' 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.473 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.731 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.298 11:29:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.298 11:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.557 11:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.557 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.557 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.815 00:18:22.815 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.815 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.815 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.074 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.074 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.074 11:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.074 11:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.074 11:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.074 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.074 { 00:18:23.074 "cntlid": 47, 00:18:23.074 "qid": 0, 00:18:23.074 "state": "enabled", 00:18:23.074 "thread": "nvmf_tgt_poll_group_000", 00:18:23.074 "listen_address": { 00:18:23.074 "trtype": "TCP", 00:18:23.074 "adrfam": "IPv4", 00:18:23.074 "traddr": "10.0.0.2", 00:18:23.074 "trsvcid": "4420" 00:18:23.074 }, 00:18:23.074 "peer_address": { 00:18:23.074 "trtype": "TCP", 00:18:23.074 "adrfam": "IPv4", 00:18:23.074 "traddr": "10.0.0.1", 00:18:23.074 "trsvcid": "36162" 00:18:23.074 }, 00:18:23.074 "auth": { 00:18:23.074 "state": "completed", 00:18:23.074 "digest": "sha256", 00:18:23.074 "dhgroup": "ffdhe8192" 00:18:23.074 } 00:18:23.074 } 00:18:23.074 ]' 00:18:23.074 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.074 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.074 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.074 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.074 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.332 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.332 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.332 
11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.333 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:18:23.898 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.898 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.898 11:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.898 11:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.898 11:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.898 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:23.898 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.898 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.898 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:23.898 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:24.156 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:24.156 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.156 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.156 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:24.156 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:24.156 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.156 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.156 11:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.156 11:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.156 11:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.156 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.156 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.414 00:18:24.414 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.414 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.414 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.673 { 00:18:24.673 "cntlid": 49, 00:18:24.673 "qid": 0, 00:18:24.673 "state": "enabled", 00:18:24.673 "thread": "nvmf_tgt_poll_group_000", 00:18:24.673 "listen_address": { 00:18:24.673 "trtype": "TCP", 00:18:24.673 "adrfam": "IPv4", 00:18:24.673 "traddr": "10.0.0.2", 00:18:24.673 "trsvcid": "4420" 00:18:24.673 }, 00:18:24.673 "peer_address": { 00:18:24.673 "trtype": "TCP", 00:18:24.673 "adrfam": "IPv4", 00:18:24.673 "traddr": "10.0.0.1", 00:18:24.673 "trsvcid": "42848" 00:18:24.673 }, 00:18:24.673 "auth": { 00:18:24.673 "state": "completed", 00:18:24.673 "digest": "sha384", 00:18:24.673 "dhgroup": "null" 00:18:24.673 } 00:18:24.673 } 00:18:24.673 ]' 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.673 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.932 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:18:25.499 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.499 11:29:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:25.499 11:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.499 11:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.499 11:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.499 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.499 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:25.499 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:25.758 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:25.758 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.758 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.758 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:25.758 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:25.758 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.758 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.758 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.758 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.758 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.758 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.758 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.758 00:18:26.017 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.017 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.017 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.017 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.017 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.017 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.017 11:29:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:26.017 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.017 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.017 { 00:18:26.017 "cntlid": 51, 00:18:26.017 "qid": 0, 00:18:26.017 "state": "enabled", 00:18:26.017 "thread": "nvmf_tgt_poll_group_000", 00:18:26.017 "listen_address": { 00:18:26.017 "trtype": "TCP", 00:18:26.017 "adrfam": "IPv4", 00:18:26.017 "traddr": "10.0.0.2", 00:18:26.017 "trsvcid": "4420" 00:18:26.017 }, 00:18:26.017 "peer_address": { 00:18:26.017 "trtype": "TCP", 00:18:26.017 "adrfam": "IPv4", 00:18:26.017 "traddr": "10.0.0.1", 00:18:26.017 "trsvcid": "42868" 00:18:26.017 }, 00:18:26.017 "auth": { 00:18:26.017 "state": "completed", 00:18:26.017 "digest": "sha384", 00:18:26.017 "dhgroup": "null" 00:18:26.017 } 00:18:26.017 } 00:18:26.017 ]' 00:18:26.017 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.017 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.017 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.349 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:26.349 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.349 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.349 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.349 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.349 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:18:26.916 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.916 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:26.916 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.916 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.916 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.916 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.916 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:26.916 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:27.174 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:27.174 11:29:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.174 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.174 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:27.174 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:27.174 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.174 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.174 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.174 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.174 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.174 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.174 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.433 00:18:27.433 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.433 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.433 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.691 { 00:18:27.691 "cntlid": 53, 00:18:27.691 "qid": 0, 00:18:27.691 "state": "enabled", 00:18:27.691 "thread": "nvmf_tgt_poll_group_000", 00:18:27.691 "listen_address": { 00:18:27.691 "trtype": "TCP", 00:18:27.691 "adrfam": "IPv4", 00:18:27.691 "traddr": "10.0.0.2", 00:18:27.691 "trsvcid": "4420" 00:18:27.691 }, 00:18:27.691 "peer_address": { 00:18:27.691 "trtype": "TCP", 00:18:27.691 "adrfam": "IPv4", 00:18:27.691 "traddr": "10.0.0.1", 00:18:27.691 "trsvcid": "42888" 00:18:27.691 }, 00:18:27.691 "auth": { 00:18:27.691 "state": "completed", 00:18:27.691 "digest": "sha384", 00:18:27.691 "dhgroup": "null" 00:18:27.691 } 00:18:27.691 } 00:18:27.691 ]' 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.691 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.950 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:18:28.517 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.517 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:28.517 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.517 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.517 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.517 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.517 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:28.517 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:28.517 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:28.517 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.517 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.517 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:28.517 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:28.517 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.517 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:28.517 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.517 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.517 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.517 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.517 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.775 00:18:28.775 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.775 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.775 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.034 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.034 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.034 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.034 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.034 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.034 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.034 { 00:18:29.034 "cntlid": 55, 00:18:29.034 "qid": 0, 00:18:29.034 "state": "enabled", 00:18:29.034 "thread": "nvmf_tgt_poll_group_000", 00:18:29.034 "listen_address": { 00:18:29.034 "trtype": "TCP", 00:18:29.034 "adrfam": "IPv4", 00:18:29.034 "traddr": "10.0.0.2", 00:18:29.034 "trsvcid": "4420" 00:18:29.034 }, 00:18:29.034 "peer_address": { 00:18:29.034 "trtype": "TCP", 00:18:29.034 "adrfam": "IPv4", 00:18:29.034 "traddr": "10.0.0.1", 00:18:29.034 "trsvcid": "42900" 00:18:29.034 }, 00:18:29.034 "auth": { 00:18:29.034 "state": "completed", 00:18:29.034 "digest": "sha384", 00:18:29.034 "dhgroup": "null" 00:18:29.034 } 00:18:29.034 } 00:18:29.034 ]' 00:18:29.034 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.034 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.034 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.034 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:29.034 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.292 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.292 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.292 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.292 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:18:29.858 11:29:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.858 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:29.858 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.858 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.858 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.858 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.858 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.858 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.858 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:30.117 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:30.117 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.117 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.117 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:30.117 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:30.117 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.117 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.117 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.117 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.117 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.117 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.117 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.375 00:18:30.375 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.375 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.375 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.634 { 00:18:30.634 "cntlid": 57, 00:18:30.634 "qid": 0, 00:18:30.634 "state": "enabled", 00:18:30.634 "thread": "nvmf_tgt_poll_group_000", 00:18:30.634 "listen_address": { 00:18:30.634 "trtype": "TCP", 00:18:30.634 "adrfam": "IPv4", 00:18:30.634 "traddr": "10.0.0.2", 00:18:30.634 "trsvcid": "4420" 00:18:30.634 }, 00:18:30.634 "peer_address": { 00:18:30.634 "trtype": "TCP", 00:18:30.634 "adrfam": "IPv4", 00:18:30.634 "traddr": "10.0.0.1", 00:18:30.634 "trsvcid": "42928" 00:18:30.634 }, 00:18:30.634 "auth": { 00:18:30.634 "state": "completed", 00:18:30.634 "digest": "sha384", 00:18:30.634 "dhgroup": "ffdhe2048" 00:18:30.634 } 00:18:30.634 } 00:18:30.634 ]' 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.634 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.893 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:18:31.460 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.460 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.460 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.460 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.460 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.460 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.460 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:31.460 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:31.718 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:31.718 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.718 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.718 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:31.718 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:31.718 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.718 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.718 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.718 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.718 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.718 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.718 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.718 00:18:31.977 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.977 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.977 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.977 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.977 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.977 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.977 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.977 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.977 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.977 { 00:18:31.977 "cntlid": 59, 00:18:31.977 "qid": 0, 00:18:31.977 "state": "enabled", 00:18:31.977 "thread": "nvmf_tgt_poll_group_000", 00:18:31.977 "listen_address": { 00:18:31.977 "trtype": "TCP", 00:18:31.977 "adrfam": "IPv4", 00:18:31.977 "traddr": "10.0.0.2", 00:18:31.977 "trsvcid": "4420" 00:18:31.977 }, 00:18:31.977 "peer_address": { 00:18:31.977 "trtype": "TCP", 00:18:31.977 "adrfam": "IPv4", 00:18:31.977 
"traddr": "10.0.0.1", 00:18:31.977 "trsvcid": "42956" 00:18:31.977 }, 00:18:31.977 "auth": { 00:18:31.977 "state": "completed", 00:18:31.977 "digest": "sha384", 00:18:31.977 "dhgroup": "ffdhe2048" 00:18:31.977 } 00:18:31.977 } 00:18:31.977 ]' 00:18:31.977 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.977 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.977 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.236 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:32.236 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.236 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.236 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.236 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.236 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:18:32.802 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.802 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:32.802 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.802 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.061 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.320 00:18:33.320 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.320 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.320 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.579 { 00:18:33.579 "cntlid": 61, 00:18:33.579 "qid": 0, 00:18:33.579 "state": "enabled", 00:18:33.579 "thread": "nvmf_tgt_poll_group_000", 00:18:33.579 "listen_address": { 00:18:33.579 "trtype": "TCP", 00:18:33.579 "adrfam": "IPv4", 00:18:33.579 "traddr": "10.0.0.2", 00:18:33.579 "trsvcid": "4420" 00:18:33.579 }, 00:18:33.579 "peer_address": { 00:18:33.579 "trtype": "TCP", 00:18:33.579 "adrfam": "IPv4", 00:18:33.579 "traddr": "10.0.0.1", 00:18:33.579 "trsvcid": "40468" 00:18:33.579 }, 00:18:33.579 "auth": { 00:18:33.579 "state": "completed", 00:18:33.579 "digest": "sha384", 00:18:33.579 "dhgroup": "ffdhe2048" 00:18:33.579 } 00:18:33.579 } 00:18:33.579 ]' 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.579 11:29:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.837 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:18:34.403 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.403 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:34.403 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.403 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.403 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.403 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.403 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.403 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.662 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:34.662 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.662 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.662 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:34.662 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:34.662 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.662 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:34.662 11:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.662 11:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.662 11:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.662 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.662 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.920 00:18:34.920 11:29:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.920 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.920 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.179 { 00:18:35.179 "cntlid": 63, 00:18:35.179 "qid": 0, 00:18:35.179 "state": "enabled", 00:18:35.179 "thread": "nvmf_tgt_poll_group_000", 00:18:35.179 "listen_address": { 00:18:35.179 "trtype": "TCP", 00:18:35.179 "adrfam": "IPv4", 00:18:35.179 "traddr": "10.0.0.2", 00:18:35.179 "trsvcid": "4420" 00:18:35.179 }, 00:18:35.179 "peer_address": { 00:18:35.179 "trtype": "TCP", 00:18:35.179 "adrfam": "IPv4", 00:18:35.179 "traddr": "10.0.0.1", 00:18:35.179 "trsvcid": "40496" 00:18:35.179 }, 00:18:35.179 "auth": { 00:18:35.179 "state": "completed", 00:18:35.179 "digest": "sha384", 00:18:35.179 "dhgroup": "ffdhe2048" 00:18:35.179 } 00:18:35.179 } 00:18:35.179 ]' 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.179 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.437 11:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:18:36.004 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.004 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:36.004 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.004 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
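Each pass of the log above and below exercises one DH-HMAC-CHAP digest/dhgroup/key combination against the same target. A minimal sketch of that per-combination cycle, assuming the addresses and NQNs shown in this run (target subsystem nqn.2024-03.io.spdk:cnode0 listening on 10.0.0.2:4420, host bdev_nvme RPC socket /var/tmp/host.sock), with keyN/ckeyN and the $dhchap_* variables standing in for the per-run generated key names and DHHC-1 secrets; rpc_cmd here is the autotest helper that talks to the target's RPC socket, as in the log:

  # host (bdev_nvme) side: limit the digests/dhgroups offered for this pass
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # target side: authorize the host NQN with the key pair under test
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key keyN --dhchap-ctrlr-key ckeyN
  # authenticate with the SPDK initiator, then check what was negotiated on the qpair
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key keyN --dhchap-ctrlr-key ckeyN
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # repeat the handshake from the kernel initiator, then clean up for the next pass
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$dhchap_secret" --dhchap-ctrl-secret "$dhchap_ctrl_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

In this run $hostnqn and $hostid correspond to the nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 identity used throughout; the expected jq output for a successful pass is the digest, dhgroup, and "completed", matching the checks in the surrounding entries.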
00:18:36.004 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.004 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.004 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.004 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.004 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.261 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:36.261 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.261 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.261 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:36.261 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:36.261 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.261 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.261 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.261 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.261 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.261 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.261 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.261 00:18:36.518 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.518 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.518 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.518 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.518 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.518 11:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.518 11:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.518 11:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.518 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.518 { 
00:18:36.518 "cntlid": 65, 00:18:36.518 "qid": 0, 00:18:36.518 "state": "enabled", 00:18:36.518 "thread": "nvmf_tgt_poll_group_000", 00:18:36.518 "listen_address": { 00:18:36.518 "trtype": "TCP", 00:18:36.518 "adrfam": "IPv4", 00:18:36.518 "traddr": "10.0.0.2", 00:18:36.518 "trsvcid": "4420" 00:18:36.518 }, 00:18:36.518 "peer_address": { 00:18:36.518 "trtype": "TCP", 00:18:36.518 "adrfam": "IPv4", 00:18:36.518 "traddr": "10.0.0.1", 00:18:36.518 "trsvcid": "40536" 00:18:36.518 }, 00:18:36.518 "auth": { 00:18:36.518 "state": "completed", 00:18:36.518 "digest": "sha384", 00:18:36.518 "dhgroup": "ffdhe3072" 00:18:36.518 } 00:18:36.518 } 00:18:36.518 ]' 00:18:36.518 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.775 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.775 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.775 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.775 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.775 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.775 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.775 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.033 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:18:37.598 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.598 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:37.598 11:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.598 11:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.599 11:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.599 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.599 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:37.599 11:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:37.599 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:37.599 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.599 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:37.599 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:37.599 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:37.599 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.599 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.599 11:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.599 11:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.599 11:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.599 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.599 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.856 00:18:37.856 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.856 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.856 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.115 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.115 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.115 11:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.115 11:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.115 11:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.115 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.115 { 00:18:38.115 "cntlid": 67, 00:18:38.115 "qid": 0, 00:18:38.115 "state": "enabled", 00:18:38.115 "thread": "nvmf_tgt_poll_group_000", 00:18:38.115 "listen_address": { 00:18:38.115 "trtype": "TCP", 00:18:38.115 "adrfam": "IPv4", 00:18:38.115 "traddr": "10.0.0.2", 00:18:38.115 "trsvcid": "4420" 00:18:38.115 }, 00:18:38.115 "peer_address": { 00:18:38.115 "trtype": "TCP", 00:18:38.115 "adrfam": "IPv4", 00:18:38.115 "traddr": "10.0.0.1", 00:18:38.115 "trsvcid": "40548" 00:18:38.115 }, 00:18:38.115 "auth": { 00:18:38.115 "state": "completed", 00:18:38.115 "digest": "sha384", 00:18:38.115 "dhgroup": "ffdhe3072" 00:18:38.115 } 00:18:38.115 } 00:18:38.115 ]' 00:18:38.115 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.115 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.115 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.374 11:29:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:38.374 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.374 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.374 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.374 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.374 11:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:18:38.940 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.940 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.940 11:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.940 11:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.940 11:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.940 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.940 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:38.940 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.199 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:39.199 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.199 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.199 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:39.199 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:39.199 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.199 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.199 11:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.199 11:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.199 11:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.199 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.199 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.458 00:18:39.458 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.458 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.458 11:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.717 { 00:18:39.717 "cntlid": 69, 00:18:39.717 "qid": 0, 00:18:39.717 "state": "enabled", 00:18:39.717 "thread": "nvmf_tgt_poll_group_000", 00:18:39.717 "listen_address": { 00:18:39.717 "trtype": "TCP", 00:18:39.717 "adrfam": "IPv4", 00:18:39.717 "traddr": "10.0.0.2", 00:18:39.717 "trsvcid": "4420" 00:18:39.717 }, 00:18:39.717 "peer_address": { 00:18:39.717 "trtype": "TCP", 00:18:39.717 "adrfam": "IPv4", 00:18:39.717 "traddr": "10.0.0.1", 00:18:39.717 "trsvcid": "40574" 00:18:39.717 }, 00:18:39.717 "auth": { 00:18:39.717 "state": "completed", 00:18:39.717 "digest": "sha384", 00:18:39.717 "dhgroup": "ffdhe3072" 00:18:39.717 } 00:18:39.717 } 00:18:39.717 ]' 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.717 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.975 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret 
DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:18:40.593 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.594 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:40.594 11:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.594 11:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.594 11:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.594 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.594 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.594 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.869 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:40.869 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.869 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.869 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:40.869 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:40.869 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.869 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:40.869 11:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.869 11:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.869 11:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.869 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.869 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.869 00:18:41.128 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.128 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.128 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.128 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.128 11:29:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.128 11:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.128 11:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.128 11:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.128 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.128 { 00:18:41.128 "cntlid": 71, 00:18:41.128 "qid": 0, 00:18:41.128 "state": "enabled", 00:18:41.128 "thread": "nvmf_tgt_poll_group_000", 00:18:41.128 "listen_address": { 00:18:41.128 "trtype": "TCP", 00:18:41.128 "adrfam": "IPv4", 00:18:41.128 "traddr": "10.0.0.2", 00:18:41.128 "trsvcid": "4420" 00:18:41.128 }, 00:18:41.128 "peer_address": { 00:18:41.128 "trtype": "TCP", 00:18:41.128 "adrfam": "IPv4", 00:18:41.128 "traddr": "10.0.0.1", 00:18:41.128 "trsvcid": "40606" 00:18:41.128 }, 00:18:41.128 "auth": { 00:18:41.128 "state": "completed", 00:18:41.128 "digest": "sha384", 00:18:41.128 "dhgroup": "ffdhe3072" 00:18:41.128 } 00:18:41.128 } 00:18:41.128 ]' 00:18:41.128 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.128 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.128 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.387 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.387 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.387 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.387 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.387 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.387 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:18:41.954 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.954 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:41.954 11:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.954 11:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.954 11:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.954 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.954 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.954 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.954 11:29:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.213 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:42.213 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.213 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.213 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:42.213 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:42.213 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.213 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.213 11:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.213 11:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.213 11:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.213 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.213 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.471 00:18:42.471 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.471 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.471 11:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.729 { 00:18:42.729 "cntlid": 73, 00:18:42.729 "qid": 0, 00:18:42.729 "state": "enabled", 00:18:42.729 "thread": "nvmf_tgt_poll_group_000", 00:18:42.729 "listen_address": { 00:18:42.729 "trtype": "TCP", 00:18:42.729 "adrfam": "IPv4", 00:18:42.729 "traddr": "10.0.0.2", 00:18:42.729 "trsvcid": "4420" 00:18:42.729 }, 00:18:42.729 "peer_address": { 00:18:42.729 "trtype": "TCP", 00:18:42.729 "adrfam": "IPv4", 00:18:42.729 "traddr": "10.0.0.1", 00:18:42.729 "trsvcid": "40642" 00:18:42.729 }, 00:18:42.729 "auth": { 00:18:42.729 
"state": "completed", 00:18:42.729 "digest": "sha384", 00:18:42.729 "dhgroup": "ffdhe4096" 00:18:42.729 } 00:18:42.729 } 00:18:42.729 ]' 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.729 11:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.988 11:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:18:43.554 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.554 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:43.554 11:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.554 11:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.554 11:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.554 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.554 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:43.554 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:43.812 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:43.812 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.812 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.812 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:43.812 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:43.812 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.812 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.812 11:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.812 11:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.812 11:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.812 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.812 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.071 00:18:44.071 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.071 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.071 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.329 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.329 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.329 11:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.329 11:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.329 11:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.329 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.330 { 00:18:44.330 "cntlid": 75, 00:18:44.330 "qid": 0, 00:18:44.330 "state": "enabled", 00:18:44.330 "thread": "nvmf_tgt_poll_group_000", 00:18:44.330 "listen_address": { 00:18:44.330 "trtype": "TCP", 00:18:44.330 "adrfam": "IPv4", 00:18:44.330 "traddr": "10.0.0.2", 00:18:44.330 "trsvcid": "4420" 00:18:44.330 }, 00:18:44.330 "peer_address": { 00:18:44.330 "trtype": "TCP", 00:18:44.330 "adrfam": "IPv4", 00:18:44.330 "traddr": "10.0.0.1", 00:18:44.330 "trsvcid": "60322" 00:18:44.330 }, 00:18:44.330 "auth": { 00:18:44.330 "state": "completed", 00:18:44.330 "digest": "sha384", 00:18:44.330 "dhgroup": "ffdhe4096" 00:18:44.330 } 00:18:44.330 } 00:18:44.330 ]' 00:18:44.330 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.330 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.330 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.330 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.330 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.330 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.330 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.330 11:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.588 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:18:45.155 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.155 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:45.155 11:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.155 11:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.155 11:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.155 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.156 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.156 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.415 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:45.415 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.415 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.415 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:45.415 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.415 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.415 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.415 11:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.415 11:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.415 11:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.415 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.415 11:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:45.674 00:18:45.674 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.674 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.674 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.674 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.674 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.674 11:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.674 11:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.674 11:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.674 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.674 { 00:18:45.674 "cntlid": 77, 00:18:45.674 "qid": 0, 00:18:45.674 "state": "enabled", 00:18:45.674 "thread": "nvmf_tgt_poll_group_000", 00:18:45.674 "listen_address": { 00:18:45.674 "trtype": "TCP", 00:18:45.674 "adrfam": "IPv4", 00:18:45.674 "traddr": "10.0.0.2", 00:18:45.674 "trsvcid": "4420" 00:18:45.674 }, 00:18:45.674 "peer_address": { 00:18:45.674 "trtype": "TCP", 00:18:45.674 "adrfam": "IPv4", 00:18:45.674 "traddr": "10.0.0.1", 00:18:45.674 "trsvcid": "60344" 00:18:45.674 }, 00:18:45.674 "auth": { 00:18:45.674 "state": "completed", 00:18:45.674 "digest": "sha384", 00:18:45.674 "dhgroup": "ffdhe4096" 00:18:45.674 } 00:18:45.674 } 00:18:45.674 ]' 00:18:45.674 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.933 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.933 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.933 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:45.933 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.933 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.933 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.933 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.193 11:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.761 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.021 00:18:47.021 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.021 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.021 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.280 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.280 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.280 11:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.280 11:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.280 11:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.280 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.280 { 00:18:47.280 "cntlid": 79, 00:18:47.280 "qid": 
0, 00:18:47.280 "state": "enabled", 00:18:47.280 "thread": "nvmf_tgt_poll_group_000", 00:18:47.280 "listen_address": { 00:18:47.280 "trtype": "TCP", 00:18:47.280 "adrfam": "IPv4", 00:18:47.280 "traddr": "10.0.0.2", 00:18:47.280 "trsvcid": "4420" 00:18:47.280 }, 00:18:47.280 "peer_address": { 00:18:47.280 "trtype": "TCP", 00:18:47.280 "adrfam": "IPv4", 00:18:47.280 "traddr": "10.0.0.1", 00:18:47.280 "trsvcid": "60368" 00:18:47.280 }, 00:18:47.280 "auth": { 00:18:47.280 "state": "completed", 00:18:47.280 "digest": "sha384", 00:18:47.280 "dhgroup": "ffdhe4096" 00:18:47.280 } 00:18:47.280 } 00:18:47.280 ]' 00:18:47.280 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.280 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.280 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.539 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:47.539 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.539 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.539 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.539 11:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.539 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:18:48.106 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.106 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:48.106 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.106 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.106 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.106 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.106 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.106 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.106 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.365 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:48.365 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.365 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.365 11:29:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.365 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:48.365 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.365 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.365 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.365 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.365 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.365 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.366 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.624 00:18:48.624 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.624 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.624 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.883 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.883 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.883 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.883 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.883 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.883 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.883 { 00:18:48.883 "cntlid": 81, 00:18:48.883 "qid": 0, 00:18:48.883 "state": "enabled", 00:18:48.883 "thread": "nvmf_tgt_poll_group_000", 00:18:48.883 "listen_address": { 00:18:48.883 "trtype": "TCP", 00:18:48.883 "adrfam": "IPv4", 00:18:48.883 "traddr": "10.0.0.2", 00:18:48.883 "trsvcid": "4420" 00:18:48.883 }, 00:18:48.883 "peer_address": { 00:18:48.883 "trtype": "TCP", 00:18:48.883 "adrfam": "IPv4", 00:18:48.883 "traddr": "10.0.0.1", 00:18:48.883 "trsvcid": "60384" 00:18:48.883 }, 00:18:48.883 "auth": { 00:18:48.883 "state": "completed", 00:18:48.883 "digest": "sha384", 00:18:48.883 "dhgroup": "ffdhe6144" 00:18:48.883 } 00:18:48.883 } 00:18:48.883 ]' 00:18:48.883 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.883 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.883 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.142 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:49.142 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.142 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.142 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.142 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.142 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:18:49.710 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.710 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:49.710 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.710 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.710 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.710 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.710 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.710 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.968 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:49.968 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.968 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.968 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:49.968 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:49.968 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.968 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.968 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.968 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.968 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.968 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.968 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.226 00:18:50.226 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.226 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.226 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.484 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.484 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.484 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.484 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.484 11:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.484 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.484 { 00:18:50.484 "cntlid": 83, 00:18:50.484 "qid": 0, 00:18:50.484 "state": "enabled", 00:18:50.484 "thread": "nvmf_tgt_poll_group_000", 00:18:50.484 "listen_address": { 00:18:50.484 "trtype": "TCP", 00:18:50.484 "adrfam": "IPv4", 00:18:50.484 "traddr": "10.0.0.2", 00:18:50.484 "trsvcid": "4420" 00:18:50.484 }, 00:18:50.484 "peer_address": { 00:18:50.484 "trtype": "TCP", 00:18:50.484 "adrfam": "IPv4", 00:18:50.485 "traddr": "10.0.0.1", 00:18:50.485 "trsvcid": "60418" 00:18:50.485 }, 00:18:50.485 "auth": { 00:18:50.485 "state": "completed", 00:18:50.485 "digest": "sha384", 00:18:50.485 "dhgroup": "ffdhe6144" 00:18:50.485 } 00:18:50.485 } 00:18:50.485 ]' 00:18:50.485 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.485 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.485 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.743 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:50.743 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.743 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.743 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.743 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.743 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret 
DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:18:51.310 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.310 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:51.310 11:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.310 11:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.310 11:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.310 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.310 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.310 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.569 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:51.569 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.569 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.569 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:51.569 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:51.569 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.569 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.569 11:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.569 11:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.569 11:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.569 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.569 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.827 00:18:52.086 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.086 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.086 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.086 11:29:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.086 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.086 11:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.086 11:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.086 11:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.086 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.086 { 00:18:52.086 "cntlid": 85, 00:18:52.086 "qid": 0, 00:18:52.086 "state": "enabled", 00:18:52.086 "thread": "nvmf_tgt_poll_group_000", 00:18:52.086 "listen_address": { 00:18:52.086 "trtype": "TCP", 00:18:52.086 "adrfam": "IPv4", 00:18:52.086 "traddr": "10.0.0.2", 00:18:52.086 "trsvcid": "4420" 00:18:52.086 }, 00:18:52.086 "peer_address": { 00:18:52.086 "trtype": "TCP", 00:18:52.086 "adrfam": "IPv4", 00:18:52.086 "traddr": "10.0.0.1", 00:18:52.086 "trsvcid": "60442" 00:18:52.086 }, 00:18:52.086 "auth": { 00:18:52.086 "state": "completed", 00:18:52.086 "digest": "sha384", 00:18:52.086 "dhgroup": "ffdhe6144" 00:18:52.086 } 00:18:52.086 } 00:18:52.086 ]' 00:18:52.086 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.086 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.086 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.344 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:52.344 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.344 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.344 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.344 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.603 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
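Each pass in this trace is the same connect_authenticate round applied to a new digest/dhgroup/key combination: restrict the host's allowed DH-HMAC-CHAP parameters, grant the host NQN on the subsystem with the key pair under test, attach a controller (which forces the in-band authentication), check the negotiated parameters on the resulting qpair, and tear down. A rough manual sketch of one such round, assuming an SPDK target already listening on 10.0.0.2:4420 and keys already registered under the names key2/ckey2 on both sides; the NQNs are the ones used throughout this run, and the rpc.py path is abbreviated from the full workspace path shown above:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  # host side (-s /var/tmp/host.sock): only allow sha384 digests and the ffdhe6144 group for this pass
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # target side (default RPC socket): admit $hostnqn, authenticated with key2; ckey2 adds controller authentication
  scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # attaching the controller triggers the DH-HMAC-CHAP exchange; the attach fails if authentication does
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # confirm the qpair reports the expected digest/dhgroup, then detach before the next combination
  scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
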
00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.170 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.737 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.737 { 00:18:53.737 "cntlid": 87, 00:18:53.737 "qid": 0, 00:18:53.737 "state": "enabled", 00:18:53.737 "thread": "nvmf_tgt_poll_group_000", 00:18:53.737 "listen_address": { 00:18:53.737 "trtype": "TCP", 00:18:53.737 "adrfam": "IPv4", 00:18:53.737 "traddr": "10.0.0.2", 00:18:53.737 "trsvcid": "4420" 00:18:53.737 }, 00:18:53.737 "peer_address": { 00:18:53.737 "trtype": "TCP", 00:18:53.737 "adrfam": "IPv4", 00:18:53.737 "traddr": "10.0.0.1", 00:18:53.737 "trsvcid": "33876" 00:18:53.737 }, 00:18:53.737 "auth": { 00:18:53.737 "state": "completed", 
00:18:53.737 "digest": "sha384", 00:18:53.737 "dhgroup": "ffdhe6144" 00:18:53.737 } 00:18:53.737 } 00:18:53.737 ]' 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.737 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.131 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.131 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.131 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.131 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:18:54.720 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.720 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:54.720 11:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.720 11:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.720 11:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.720 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.720 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.720 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.720 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.720 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:54.720 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.979 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.979 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:54.979 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:54.979 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.979 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:54.979 11:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.979 11:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.979 11:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.979 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.979 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.238 00:18:55.238 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.238 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.238 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.498 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.498 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.498 11:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.498 11:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.498 11:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.498 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.498 { 00:18:55.498 "cntlid": 89, 00:18:55.498 "qid": 0, 00:18:55.498 "state": "enabled", 00:18:55.498 "thread": "nvmf_tgt_poll_group_000", 00:18:55.498 "listen_address": { 00:18:55.498 "trtype": "TCP", 00:18:55.498 "adrfam": "IPv4", 00:18:55.498 "traddr": "10.0.0.2", 00:18:55.498 "trsvcid": "4420" 00:18:55.498 }, 00:18:55.498 "peer_address": { 00:18:55.498 "trtype": "TCP", 00:18:55.498 "adrfam": "IPv4", 00:18:55.498 "traddr": "10.0.0.1", 00:18:55.498 "trsvcid": "33904" 00:18:55.498 }, 00:18:55.498 "auth": { 00:18:55.498 "state": "completed", 00:18:55.498 "digest": "sha384", 00:18:55.498 "dhgroup": "ffdhe8192" 00:18:55.498 } 00:18:55.498 } 00:18:55.498 ]' 00:18:55.498 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.498 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.498 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.498 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.498 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.757 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.757 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.757 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.757 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:18:56.324 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.324 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:56.324 11:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.324 11:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.324 11:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.324 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.324 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.324 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.583 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:56.583 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.583 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.583 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:56.583 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.583 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.583 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.583 11:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.583 11:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.583 11:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.583 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.583 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
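The nvme-cli leg that follows each attach in this trace repeats the same authentication from the kernel initiator: connect with the DHHC-1 secrets, then disconnect and revoke the host so the next key starts clean. A minimal sketch of that leg; $host_secret and $ctrl_secret are illustrative variable names standing in for the full per-run DHHC-1 strings printed inline above, and the rpc.py path is again abbreviated:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  # in-band DH-HMAC-CHAP from the kernel host; --dhchap-ctrl-secret makes the host verify the controller as well
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
  # tear down and remove the host entry so the next digest/dhgroup/key combination starts from a clean slate
  nvme disconnect -n "$subnqn"
  scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
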
00:18:57.150 00:18:57.150 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.150 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.150 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.408 { 00:18:57.408 "cntlid": 91, 00:18:57.408 "qid": 0, 00:18:57.408 "state": "enabled", 00:18:57.408 "thread": "nvmf_tgt_poll_group_000", 00:18:57.408 "listen_address": { 00:18:57.408 "trtype": "TCP", 00:18:57.408 "adrfam": "IPv4", 00:18:57.408 "traddr": "10.0.0.2", 00:18:57.408 "trsvcid": "4420" 00:18:57.408 }, 00:18:57.408 "peer_address": { 00:18:57.408 "trtype": "TCP", 00:18:57.408 "adrfam": "IPv4", 00:18:57.408 "traddr": "10.0.0.1", 00:18:57.408 "trsvcid": "33922" 00:18:57.408 }, 00:18:57.408 "auth": { 00:18:57.408 "state": "completed", 00:18:57.408 "digest": "sha384", 00:18:57.408 "dhgroup": "ffdhe8192" 00:18:57.408 } 00:18:57.408 } 00:18:57.408 ]' 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.408 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.409 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.667 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:18:58.232 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.232 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:58.232 11:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:58.232 11:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.232 11:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.232 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.232 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.232 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.490 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:58.490 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.490 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.490 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:58.490 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.490 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.490 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.490 11:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.490 11:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.490 11:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.490 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.490 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.748 00:18:58.748 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.748 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.748 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.005 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.005 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.005 11:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.005 11:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.005 11:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.005 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.005 { 
00:18:59.005 "cntlid": 93, 00:18:59.005 "qid": 0, 00:18:59.005 "state": "enabled", 00:18:59.005 "thread": "nvmf_tgt_poll_group_000", 00:18:59.005 "listen_address": { 00:18:59.005 "trtype": "TCP", 00:18:59.005 "adrfam": "IPv4", 00:18:59.005 "traddr": "10.0.0.2", 00:18:59.005 "trsvcid": "4420" 00:18:59.005 }, 00:18:59.005 "peer_address": { 00:18:59.005 "trtype": "TCP", 00:18:59.005 "adrfam": "IPv4", 00:18:59.005 "traddr": "10.0.0.1", 00:18:59.005 "trsvcid": "33956" 00:18:59.005 }, 00:18:59.005 "auth": { 00:18:59.005 "state": "completed", 00:18:59.005 "digest": "sha384", 00:18:59.005 "dhgroup": "ffdhe8192" 00:18:59.005 } 00:18:59.005 } 00:18:59.005 ]' 00:18:59.005 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.005 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.005 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.262 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.262 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.262 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.262 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.262 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.262 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:18:59.828 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.828 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:59.828 11:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.828 11:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.085 11:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.085 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.085 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.085 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.085 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:00.085 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.085 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.085 11:29:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:00.085 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:00.085 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.085 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:00.085 11:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.085 11:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.086 11:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.086 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.086 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.651 00:19:00.651 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.651 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.651 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.908 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.908 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.908 11:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.908 11:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.908 11:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.908 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.908 { 00:19:00.908 "cntlid": 95, 00:19:00.908 "qid": 0, 00:19:00.908 "state": "enabled", 00:19:00.908 "thread": "nvmf_tgt_poll_group_000", 00:19:00.908 "listen_address": { 00:19:00.908 "trtype": "TCP", 00:19:00.909 "adrfam": "IPv4", 00:19:00.909 "traddr": "10.0.0.2", 00:19:00.909 "trsvcid": "4420" 00:19:00.909 }, 00:19:00.909 "peer_address": { 00:19:00.909 "trtype": "TCP", 00:19:00.909 "adrfam": "IPv4", 00:19:00.909 "traddr": "10.0.0.1", 00:19:00.909 "trsvcid": "33990" 00:19:00.909 }, 00:19:00.909 "auth": { 00:19:00.909 "state": "completed", 00:19:00.909 "digest": "sha384", 00:19:00.909 "dhgroup": "ffdhe8192" 00:19:00.909 } 00:19:00.909 } 00:19:00.909 ]' 00:19:00.909 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.909 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.909 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.909 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.909 11:29:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.909 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.909 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.909 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.167 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:19:01.735 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.735 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:01.735 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.735 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.735 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.735 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:01.735 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.735 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.735 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:01.735 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:01.994 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:01.994 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.994 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.994 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.994 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.994 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.994 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.994 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.994 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.994 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.994 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.994 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.253 00:19:02.253 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.253 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.253 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.253 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.253 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.253 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.253 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.253 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.253 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.253 { 00:19:02.253 "cntlid": 97, 00:19:02.253 "qid": 0, 00:19:02.253 "state": "enabled", 00:19:02.253 "thread": "nvmf_tgt_poll_group_000", 00:19:02.253 "listen_address": { 00:19:02.253 "trtype": "TCP", 00:19:02.253 "adrfam": "IPv4", 00:19:02.253 "traddr": "10.0.0.2", 00:19:02.253 "trsvcid": "4420" 00:19:02.253 }, 00:19:02.253 "peer_address": { 00:19:02.253 "trtype": "TCP", 00:19:02.253 "adrfam": "IPv4", 00:19:02.253 "traddr": "10.0.0.1", 00:19:02.253 "trsvcid": "34020" 00:19:02.253 }, 00:19:02.253 "auth": { 00:19:02.253 "state": "completed", 00:19:02.253 "digest": "sha512", 00:19:02.253 "dhgroup": "null" 00:19:02.253 } 00:19:02.253 } 00:19:02.253 ]' 00:19:02.253 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.253 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.253 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.513 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:02.513 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.513 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.513 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.513 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.771 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret 
DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.338 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.597 00:19:03.597 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.597 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.597 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.856 { 00:19:03.856 "cntlid": 99, 00:19:03.856 "qid": 0, 00:19:03.856 "state": "enabled", 00:19:03.856 "thread": "nvmf_tgt_poll_group_000", 00:19:03.856 "listen_address": { 00:19:03.856 "trtype": "TCP", 00:19:03.856 "adrfam": "IPv4", 00:19:03.856 "traddr": "10.0.0.2", 00:19:03.856 "trsvcid": "4420" 00:19:03.856 }, 00:19:03.856 "peer_address": { 00:19:03.856 "trtype": "TCP", 00:19:03.856 "adrfam": "IPv4", 00:19:03.856 "traddr": "10.0.0.1", 00:19:03.856 "trsvcid": "55294" 00:19:03.856 }, 00:19:03.856 "auth": { 00:19:03.856 "state": "completed", 00:19:03.856 "digest": "sha512", 00:19:03.856 "dhgroup": "null" 00:19:03.856 } 00:19:03.856 } 00:19:03.856 ]' 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.856 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.114 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:19:04.679 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.679 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:04.679 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.679 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.679 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.679 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.679 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.679 11:29:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.938 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:04.938 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.938 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.938 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:04.938 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.938 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.938 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.938 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.938 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.938 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.938 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.938 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.196 00:19:05.196 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.196 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.196 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.196 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.197 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.197 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.197 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.197 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.197 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.197 { 00:19:05.197 "cntlid": 101, 00:19:05.197 "qid": 0, 00:19:05.197 "state": "enabled", 00:19:05.197 "thread": "nvmf_tgt_poll_group_000", 00:19:05.197 "listen_address": { 00:19:05.197 "trtype": "TCP", 00:19:05.197 "adrfam": "IPv4", 00:19:05.197 "traddr": "10.0.0.2", 00:19:05.197 "trsvcid": "4420" 00:19:05.197 }, 00:19:05.197 "peer_address": { 00:19:05.197 "trtype": "TCP", 00:19:05.197 "adrfam": "IPv4", 00:19:05.197 "traddr": "10.0.0.1", 00:19:05.197 "trsvcid": "55310" 00:19:05.197 }, 00:19:05.197 "auth": 
{ 00:19:05.197 "state": "completed", 00:19:05.197 "digest": "sha512", 00:19:05.197 "dhgroup": "null" 00:19:05.197 } 00:19:05.197 } 00:19:05.197 ]' 00:19:05.197 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.455 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.455 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.455 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:05.455 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.455 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.455 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.455 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.713 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.280 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.539 00:19:06.539 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.539 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.539 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.798 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.798 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.798 11:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.798 11:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.798 11:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.798 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.798 { 00:19:06.798 "cntlid": 103, 00:19:06.798 "qid": 0, 00:19:06.798 "state": "enabled", 00:19:06.798 "thread": "nvmf_tgt_poll_group_000", 00:19:06.798 "listen_address": { 00:19:06.798 "trtype": "TCP", 00:19:06.798 "adrfam": "IPv4", 00:19:06.798 "traddr": "10.0.0.2", 00:19:06.798 "trsvcid": "4420" 00:19:06.798 }, 00:19:06.798 "peer_address": { 00:19:06.798 "trtype": "TCP", 00:19:06.798 "adrfam": "IPv4", 00:19:06.798 "traddr": "10.0.0.1", 00:19:06.798 "trsvcid": "55340" 00:19:06.798 }, 00:19:06.798 "auth": { 00:19:06.798 "state": "completed", 00:19:06.798 "digest": "sha512", 00:19:06.798 "dhgroup": "null" 00:19:06.798 } 00:19:06.798 } 00:19:06.798 ]' 00:19:06.798 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.798 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.798 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.798 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:06.798 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.056 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.056 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.056 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.056 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:19:07.630 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.630 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:07.630 11:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.630 11:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.630 11:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.630 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.630 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.630 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.630 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.889 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:07.889 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.889 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.889 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.889 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.889 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.889 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.889 11:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.889 11:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.889 11:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.889 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.889 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.148 00:19:08.148 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.148 11:29:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.148 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.407 { 00:19:08.407 "cntlid": 105, 00:19:08.407 "qid": 0, 00:19:08.407 "state": "enabled", 00:19:08.407 "thread": "nvmf_tgt_poll_group_000", 00:19:08.407 "listen_address": { 00:19:08.407 "trtype": "TCP", 00:19:08.407 "adrfam": "IPv4", 00:19:08.407 "traddr": "10.0.0.2", 00:19:08.407 "trsvcid": "4420" 00:19:08.407 }, 00:19:08.407 "peer_address": { 00:19:08.407 "trtype": "TCP", 00:19:08.407 "adrfam": "IPv4", 00:19:08.407 "traddr": "10.0.0.1", 00:19:08.407 "trsvcid": "55364" 00:19:08.407 }, 00:19:08.407 "auth": { 00:19:08.407 "state": "completed", 00:19:08.407 "digest": "sha512", 00:19:08.407 "dhgroup": "ffdhe2048" 00:19:08.407 } 00:19:08.407 } 00:19:08.407 ]' 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.407 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.665 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:19:09.233 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.233 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:09.233 11:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.233 11:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
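The records above and below repeat the same connect_authenticate cycle for each digest/dhgroup/key combination: restrict the host-side DH-HMAC-CHAP parameters, add the host to the subsystem with the key under test, attach a controller, confirm the qpair reports auth.state == completed, then tear down. A minimal sketch of that per-combination flow follows, assuming the rpc.py path, host socket, NQNs, flags, and key names copied from the trace; the helper structure is an illustrative reduction, not the actual target/auth.sh, and the target-side calls are assumed to use the default SPDK RPC socket (the log routes them through rpc_cmd).

    # Sketch only: one digest/dhgroup/key verification pass, mirroring the trace.
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBSYS_NQN=nqn.2024-03.io.spdk:cnode0
    HOST_NQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    check_auth() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Limit the host-side bdev_nvme module to a single digest/dhgroup pair.
        $RPC_PY -s $HOST_SOCK bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Allow the host on the target subsystem with the key under test
        # (the trace omits --dhchap-ctrlr-key when no ckey is defined, e.g. key3).
        $RPC_PY nvmf_subsystem_add_host "$SUBSYS_NQN" "$HOST_NQN" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

        # Attach a controller from the host side, then confirm the target sees
        # an authenticated qpair, exactly as the jq checks in the log do.
        $RPC_PY -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$HOST_NQN" -n "$SUBSYS_NQN" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        state=$($RPC_PY nvmf_subsystem_get_qpairs "$SUBSYS_NQN" | jq -r '.[0].auth.state')
        [[ $state == completed ]] || return 1

        # Tear down before the next combination.
        $RPC_PY -s $HOST_SOCK bdev_nvme_detach_controller nvme0
        $RPC_PY nvmf_subsystem_remove_host "$SUBSYS_NQN" "$HOST_NQN"
    }

In the trace the split is the same: host-side bdev_nvme calls go through rpc.py -s /var/tmp/host.sock, while the subsystem add/remove/get_qpairs calls run against the target, and the nvme connect / nvme disconnect records exercise the same keys from the kernel initiator using --dhchap-secret / --dhchap-ctrl-secret.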
00:19:09.233 11:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.233 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.233 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.233 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.492 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:09.492 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.492 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.492 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.492 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.492 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.492 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.492 11:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.492 11:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.492 11:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.492 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.492 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.755 00:19:09.755 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.755 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.755 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.755 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.755 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.755 11:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.755 11:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.755 11:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.755 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.755 { 00:19:09.755 "cntlid": 107, 00:19:09.755 "qid": 0, 00:19:09.755 "state": "enabled", 00:19:09.755 "thread": 
"nvmf_tgt_poll_group_000", 00:19:09.755 "listen_address": { 00:19:09.755 "trtype": "TCP", 00:19:09.755 "adrfam": "IPv4", 00:19:09.755 "traddr": "10.0.0.2", 00:19:09.755 "trsvcid": "4420" 00:19:09.755 }, 00:19:09.755 "peer_address": { 00:19:09.755 "trtype": "TCP", 00:19:09.755 "adrfam": "IPv4", 00:19:09.755 "traddr": "10.0.0.1", 00:19:09.755 "trsvcid": "55386" 00:19:09.755 }, 00:19:09.755 "auth": { 00:19:09.755 "state": "completed", 00:19:09.755 "digest": "sha512", 00:19:09.755 "dhgroup": "ffdhe2048" 00:19:09.755 } 00:19:09.755 } 00:19:09.755 ]' 00:19:09.755 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.017 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.017 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.017 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.017 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.017 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.017 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.017 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.017 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:19:10.583 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.583 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.583 11:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.583 11:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.583 11:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:10.842 11:29:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.842 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.101 00:19:11.101 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.101 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.101 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.360 { 00:19:11.360 "cntlid": 109, 00:19:11.360 "qid": 0, 00:19:11.360 "state": "enabled", 00:19:11.360 "thread": "nvmf_tgt_poll_group_000", 00:19:11.360 "listen_address": { 00:19:11.360 "trtype": "TCP", 00:19:11.360 "adrfam": "IPv4", 00:19:11.360 "traddr": "10.0.0.2", 00:19:11.360 "trsvcid": "4420" 00:19:11.360 }, 00:19:11.360 "peer_address": { 00:19:11.360 "trtype": "TCP", 00:19:11.360 "adrfam": "IPv4", 00:19:11.360 "traddr": "10.0.0.1", 00:19:11.360 "trsvcid": "55424" 00:19:11.360 }, 00:19:11.360 "auth": { 00:19:11.360 "state": "completed", 00:19:11.360 "digest": "sha512", 00:19:11.360 "dhgroup": "ffdhe2048" 00:19:11.360 } 00:19:11.360 } 00:19:11.360 ]' 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.360 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.618 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:19:12.186 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.186 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:12.186 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.186 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.186 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.186 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.186 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.186 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.444 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:12.444 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.444 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.444 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.444 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.444 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.444 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:12.444 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.444 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.444 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.444 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.444 11:29:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.703 00:19:12.703 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.703 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.703 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.962 { 00:19:12.962 "cntlid": 111, 00:19:12.962 "qid": 0, 00:19:12.962 "state": "enabled", 00:19:12.962 "thread": "nvmf_tgt_poll_group_000", 00:19:12.962 "listen_address": { 00:19:12.962 "trtype": "TCP", 00:19:12.962 "adrfam": "IPv4", 00:19:12.962 "traddr": "10.0.0.2", 00:19:12.962 "trsvcid": "4420" 00:19:12.962 }, 00:19:12.962 "peer_address": { 00:19:12.962 "trtype": "TCP", 00:19:12.962 "adrfam": "IPv4", 00:19:12.962 "traddr": "10.0.0.1", 00:19:12.962 "trsvcid": "55442" 00:19:12.962 }, 00:19:12.962 "auth": { 00:19:12.962 "state": "completed", 00:19:12.962 "digest": "sha512", 00:19:12.962 "dhgroup": "ffdhe2048" 00:19:12.962 } 00:19:12.962 } 00:19:12.962 ]' 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.962 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.221 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:19:13.790 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.790 11:29:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:13.790 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.790 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.790 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.790 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.790 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.790 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.790 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.048 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:14.048 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.048 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.048 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:14.048 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.048 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.048 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.048 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.048 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.048 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.048 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.048 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.048 00:19:14.307 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.307 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.307 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.307 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.307 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.307 11:29:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.307 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.307 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.307 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.307 { 00:19:14.307 "cntlid": 113, 00:19:14.307 "qid": 0, 00:19:14.307 "state": "enabled", 00:19:14.307 "thread": "nvmf_tgt_poll_group_000", 00:19:14.307 "listen_address": { 00:19:14.307 "trtype": "TCP", 00:19:14.307 "adrfam": "IPv4", 00:19:14.307 "traddr": "10.0.0.2", 00:19:14.307 "trsvcid": "4420" 00:19:14.307 }, 00:19:14.307 "peer_address": { 00:19:14.307 "trtype": "TCP", 00:19:14.307 "adrfam": "IPv4", 00:19:14.307 "traddr": "10.0.0.1", 00:19:14.307 "trsvcid": "44378" 00:19:14.307 }, 00:19:14.307 "auth": { 00:19:14.307 "state": "completed", 00:19:14.307 "digest": "sha512", 00:19:14.307 "dhgroup": "ffdhe3072" 00:19:14.307 } 00:19:14.307 } 00:19:14.307 ]' 00:19:14.307 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.307 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.307 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.566 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:14.566 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.566 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.566 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.566 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.566 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:19:15.134 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.134 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:15.134 11:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.134 11:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.134 11:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.134 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.134 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.134 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.412 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:15.412 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.412 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.412 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:15.412 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.412 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.412 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.412 11:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.412 11:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.412 11:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.412 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.412 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.680 00:19:15.680 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.680 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.680 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.938 { 00:19:15.938 "cntlid": 115, 00:19:15.938 "qid": 0, 00:19:15.938 "state": "enabled", 00:19:15.938 "thread": "nvmf_tgt_poll_group_000", 00:19:15.938 "listen_address": { 00:19:15.938 "trtype": "TCP", 00:19:15.938 "adrfam": "IPv4", 00:19:15.938 "traddr": "10.0.0.2", 00:19:15.938 "trsvcid": "4420" 00:19:15.938 }, 00:19:15.938 "peer_address": { 00:19:15.938 "trtype": "TCP", 00:19:15.938 "adrfam": "IPv4", 00:19:15.938 "traddr": "10.0.0.1", 00:19:15.938 "trsvcid": "44398" 00:19:15.938 }, 00:19:15.938 "auth": { 00:19:15.938 "state": "completed", 00:19:15.938 "digest": "sha512", 00:19:15.938 "dhgroup": "ffdhe3072" 00:19:15.938 } 00:19:15.938 } 
00:19:15.938 ]' 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.938 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.196 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:19:16.763 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.763 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:16.763 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.763 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.763 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.763 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.763 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.763 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.021 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:17.021 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.021 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.021 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.021 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.021 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.021 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.021 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.021 11:30:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.021 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.022 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.022 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.279 00:19:17.279 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.279 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.279 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.537 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.537 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.537 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.537 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.537 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.537 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.537 { 00:19:17.537 "cntlid": 117, 00:19:17.537 "qid": 0, 00:19:17.537 "state": "enabled", 00:19:17.537 "thread": "nvmf_tgt_poll_group_000", 00:19:17.537 "listen_address": { 00:19:17.537 "trtype": "TCP", 00:19:17.537 "adrfam": "IPv4", 00:19:17.537 "traddr": "10.0.0.2", 00:19:17.537 "trsvcid": "4420" 00:19:17.537 }, 00:19:17.537 "peer_address": { 00:19:17.537 "trtype": "TCP", 00:19:17.537 "adrfam": "IPv4", 00:19:17.537 "traddr": "10.0.0.1", 00:19:17.537 "trsvcid": "44424" 00:19:17.537 }, 00:19:17.537 "auth": { 00:19:17.537 "state": "completed", 00:19:17.537 "digest": "sha512", 00:19:17.537 "dhgroup": "ffdhe3072" 00:19:17.537 } 00:19:17.537 } 00:19:17.537 ]' 00:19:17.537 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.537 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.537 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.537 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.537 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.537 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.537 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.537 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.795 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:19:18.361 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.361 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:18.361 11:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.361 11:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.361 11:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.361 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.361 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.361 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.619 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:18.619 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.619 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.619 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:18.619 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.619 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.619 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:18.619 11:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.619 11:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.619 11:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.619 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.619 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.619 00:19:18.878 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.878 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.878 11:30:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.878 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.878 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.878 11:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.878 11:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.878 11:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.878 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.878 { 00:19:18.878 "cntlid": 119, 00:19:18.878 "qid": 0, 00:19:18.878 "state": "enabled", 00:19:18.878 "thread": "nvmf_tgt_poll_group_000", 00:19:18.878 "listen_address": { 00:19:18.878 "trtype": "TCP", 00:19:18.878 "adrfam": "IPv4", 00:19:18.878 "traddr": "10.0.0.2", 00:19:18.878 "trsvcid": "4420" 00:19:18.878 }, 00:19:18.878 "peer_address": { 00:19:18.878 "trtype": "TCP", 00:19:18.878 "adrfam": "IPv4", 00:19:18.878 "traddr": "10.0.0.1", 00:19:18.878 "trsvcid": "44446" 00:19:18.878 }, 00:19:18.878 "auth": { 00:19:18.878 "state": "completed", 00:19:18.878 "digest": "sha512", 00:19:18.878 "dhgroup": "ffdhe3072" 00:19:18.878 } 00:19:18.878 } 00:19:18.878 ]' 00:19:18.878 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.137 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.137 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.137 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.137 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.137 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.137 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.137 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.395 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.964 11:30:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.964 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.223 00:19:20.483 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.483 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.483 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.483 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.483 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.483 11:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.483 11:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.483 11:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.483 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.483 { 00:19:20.483 "cntlid": 121, 00:19:20.483 "qid": 0, 00:19:20.483 "state": "enabled", 00:19:20.483 "thread": "nvmf_tgt_poll_group_000", 00:19:20.483 "listen_address": { 00:19:20.483 "trtype": "TCP", 00:19:20.483 "adrfam": "IPv4", 
00:19:20.483 "traddr": "10.0.0.2", 00:19:20.483 "trsvcid": "4420" 00:19:20.483 }, 00:19:20.483 "peer_address": { 00:19:20.483 "trtype": "TCP", 00:19:20.483 "adrfam": "IPv4", 00:19:20.483 "traddr": "10.0.0.1", 00:19:20.483 "trsvcid": "44482" 00:19:20.483 }, 00:19:20.483 "auth": { 00:19:20.483 "state": "completed", 00:19:20.483 "digest": "sha512", 00:19:20.483 "dhgroup": "ffdhe4096" 00:19:20.483 } 00:19:20.483 } 00:19:20.483 ]' 00:19:20.483 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.483 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.483 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.742 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.742 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.742 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.742 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.742 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.743 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:19:21.310 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.311 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:21.311 11:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.311 11:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.570 11:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.570 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.570 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.570 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.570 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:21.570 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.570 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.570 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:21.570 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:21.570 11:30:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.570 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.570 11:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.570 11:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.570 11:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.570 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.570 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.828 00:19:21.829 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.829 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.829 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.088 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.088 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.088 11:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.088 11:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.088 11:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.088 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.088 { 00:19:22.088 "cntlid": 123, 00:19:22.088 "qid": 0, 00:19:22.088 "state": "enabled", 00:19:22.088 "thread": "nvmf_tgt_poll_group_000", 00:19:22.088 "listen_address": { 00:19:22.088 "trtype": "TCP", 00:19:22.088 "adrfam": "IPv4", 00:19:22.088 "traddr": "10.0.0.2", 00:19:22.088 "trsvcid": "4420" 00:19:22.088 }, 00:19:22.088 "peer_address": { 00:19:22.088 "trtype": "TCP", 00:19:22.088 "adrfam": "IPv4", 00:19:22.088 "traddr": "10.0.0.1", 00:19:22.088 "trsvcid": "44506" 00:19:22.088 }, 00:19:22.088 "auth": { 00:19:22.088 "state": "completed", 00:19:22.088 "digest": "sha512", 00:19:22.088 "dhgroup": "ffdhe4096" 00:19:22.088 } 00:19:22.088 } 00:19:22.088 ]' 00:19:22.088 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.088 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.088 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.088 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.088 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.347 11:30:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.347 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.347 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.347 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:19:22.914 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.914 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:22.914 11:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.914 11:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.914 11:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.914 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.914 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.914 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.172 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:23.172 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.172 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.172 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:23.172 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:23.172 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.172 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.172 11:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.172 11:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.172 11:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.172 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.172 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.430 00:19:23.430 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.430 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.430 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.689 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.689 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.689 11:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.689 11:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.689 11:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.689 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.689 { 00:19:23.689 "cntlid": 125, 00:19:23.689 "qid": 0, 00:19:23.689 "state": "enabled", 00:19:23.689 "thread": "nvmf_tgt_poll_group_000", 00:19:23.689 "listen_address": { 00:19:23.689 "trtype": "TCP", 00:19:23.689 "adrfam": "IPv4", 00:19:23.689 "traddr": "10.0.0.2", 00:19:23.689 "trsvcid": "4420" 00:19:23.689 }, 00:19:23.689 "peer_address": { 00:19:23.689 "trtype": "TCP", 00:19:23.689 "adrfam": "IPv4", 00:19:23.689 "traddr": "10.0.0.1", 00:19:23.689 "trsvcid": "46300" 00:19:23.689 }, 00:19:23.689 "auth": { 00:19:23.689 "state": "completed", 00:19:23.689 "digest": "sha512", 00:19:23.689 "dhgroup": "ffdhe4096" 00:19:23.689 } 00:19:23.689 } 00:19:23.689 ]' 00:19:23.689 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.689 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.689 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.689 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.690 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.690 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.690 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.690 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.948 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:19:24.517 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:24.517 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:24.517 11:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.517 11:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.517 11:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.517 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.517 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:24.517 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:24.776 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:24.776 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.776 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.776 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:24.776 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.776 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.776 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:24.776 11:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.776 11:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.776 11:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.776 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.776 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.035 00:19:25.035 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.035 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.035 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.294 { 00:19:25.294 "cntlid": 127, 00:19:25.294 "qid": 0, 00:19:25.294 "state": "enabled", 00:19:25.294 "thread": "nvmf_tgt_poll_group_000", 00:19:25.294 "listen_address": { 00:19:25.294 "trtype": "TCP", 00:19:25.294 "adrfam": "IPv4", 00:19:25.294 "traddr": "10.0.0.2", 00:19:25.294 "trsvcid": "4420" 00:19:25.294 }, 00:19:25.294 "peer_address": { 00:19:25.294 "trtype": "TCP", 00:19:25.294 "adrfam": "IPv4", 00:19:25.294 "traddr": "10.0.0.1", 00:19:25.294 "trsvcid": "46316" 00:19:25.294 }, 00:19:25.294 "auth": { 00:19:25.294 "state": "completed", 00:19:25.294 "digest": "sha512", 00:19:25.294 "dhgroup": "ffdhe4096" 00:19:25.294 } 00:19:25.294 } 00:19:25.294 ]' 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.294 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.552 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.119 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.378 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.378 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.378 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.637 00:19:26.637 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.637 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.637 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.896 { 00:19:26.896 "cntlid": 129, 00:19:26.896 "qid": 0, 00:19:26.896 "state": "enabled", 00:19:26.896 "thread": "nvmf_tgt_poll_group_000", 00:19:26.896 "listen_address": { 00:19:26.896 "trtype": "TCP", 00:19:26.896 "adrfam": "IPv4", 00:19:26.896 "traddr": "10.0.0.2", 00:19:26.896 "trsvcid": "4420" 00:19:26.896 }, 00:19:26.896 "peer_address": { 00:19:26.896 "trtype": "TCP", 00:19:26.896 "adrfam": "IPv4", 00:19:26.896 "traddr": "10.0.0.1", 00:19:26.896 "trsvcid": "46344" 00:19:26.896 }, 00:19:26.896 "auth": { 00:19:26.896 "state": "completed", 00:19:26.896 "digest": "sha512", 00:19:26.896 "dhgroup": "ffdhe6144" 00:19:26.896 } 00:19:26.896 } 00:19:26.896 ]' 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.896 11:30:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.896 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.155 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:19:27.722 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.722 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:27.722 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.722 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.722 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.722 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.722 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:27.722 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:27.982 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:27.982 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.982 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.982 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:27.982 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.982 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.982 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.982 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.982 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.982 11:30:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.982 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.982 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.241 00:19:28.241 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.241 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.242 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.500 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.500 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.500 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.500 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.500 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.500 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.500 { 00:19:28.500 "cntlid": 131, 00:19:28.500 "qid": 0, 00:19:28.500 "state": "enabled", 00:19:28.500 "thread": "nvmf_tgt_poll_group_000", 00:19:28.500 "listen_address": { 00:19:28.500 "trtype": "TCP", 00:19:28.500 "adrfam": "IPv4", 00:19:28.500 "traddr": "10.0.0.2", 00:19:28.500 "trsvcid": "4420" 00:19:28.500 }, 00:19:28.500 "peer_address": { 00:19:28.500 "trtype": "TCP", 00:19:28.500 "adrfam": "IPv4", 00:19:28.500 "traddr": "10.0.0.1", 00:19:28.500 "trsvcid": "46376" 00:19:28.500 }, 00:19:28.500 "auth": { 00:19:28.500 "state": "completed", 00:19:28.500 "digest": "sha512", 00:19:28.500 "dhgroup": "ffdhe6144" 00:19:28.500 } 00:19:28.500 } 00:19:28.500 ]' 00:19:28.500 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.500 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.500 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.500 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:28.500 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.500 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.500 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.500 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.759 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:19:29.325 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.325 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:29.325 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.325 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.325 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.325 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.325 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.325 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.584 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:29.584 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.584 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.584 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:29.584 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:29.584 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.584 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.584 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.584 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.584 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.584 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.584 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.851 00:19:29.851 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.851 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.851 11:30:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.151 { 00:19:30.151 "cntlid": 133, 00:19:30.151 "qid": 0, 00:19:30.151 "state": "enabled", 00:19:30.151 "thread": "nvmf_tgt_poll_group_000", 00:19:30.151 "listen_address": { 00:19:30.151 "trtype": "TCP", 00:19:30.151 "adrfam": "IPv4", 00:19:30.151 "traddr": "10.0.0.2", 00:19:30.151 "trsvcid": "4420" 00:19:30.151 }, 00:19:30.151 "peer_address": { 00:19:30.151 "trtype": "TCP", 00:19:30.151 "adrfam": "IPv4", 00:19:30.151 "traddr": "10.0.0.1", 00:19:30.151 "trsvcid": "46392" 00:19:30.151 }, 00:19:30.151 "auth": { 00:19:30.151 "state": "completed", 00:19:30.151 "digest": "sha512", 00:19:30.151 "dhgroup": "ffdhe6144" 00:19:30.151 } 00:19:30.151 } 00:19:30.151 ]' 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.151 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.409 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:19:30.976 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.976 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:30.976 11:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.976 11:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.976 11:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.976 11:30:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.976 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:30.976 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.234 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:31.234 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.234 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.234 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.234 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:31.234 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.234 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:31.234 11:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.234 11:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.234 11:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.234 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.234 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.492 00:19:31.492 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.492 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.492 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.750 { 00:19:31.750 "cntlid": 135, 00:19:31.750 "qid": 0, 00:19:31.750 "state": "enabled", 00:19:31.750 "thread": "nvmf_tgt_poll_group_000", 00:19:31.750 "listen_address": { 00:19:31.750 "trtype": "TCP", 00:19:31.750 "adrfam": "IPv4", 00:19:31.750 "traddr": "10.0.0.2", 00:19:31.750 "trsvcid": "4420" 00:19:31.750 }, 
00:19:31.750 "peer_address": { 00:19:31.750 "trtype": "TCP", 00:19:31.750 "adrfam": "IPv4", 00:19:31.750 "traddr": "10.0.0.1", 00:19:31.750 "trsvcid": "46424" 00:19:31.750 }, 00:19:31.750 "auth": { 00:19:31.750 "state": "completed", 00:19:31.750 "digest": "sha512", 00:19:31.750 "dhgroup": "ffdhe6144" 00:19:31.750 } 00:19:31.750 } 00:19:31.750 ]' 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.750 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.009 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:19:32.577 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.577 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:32.577 11:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.577 11:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.577 11:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.577 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.577 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.577 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:32.577 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:32.836 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:32.836 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.836 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.836 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:32.836 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.836 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:32.836 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.836 11:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.836 11:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.836 11:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.836 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.836 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.095 00:19:33.095 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.095 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.096 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.354 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.354 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.354 11:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.354 11:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.354 11:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.354 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.354 { 00:19:33.354 "cntlid": 137, 00:19:33.354 "qid": 0, 00:19:33.354 "state": "enabled", 00:19:33.354 "thread": "nvmf_tgt_poll_group_000", 00:19:33.354 "listen_address": { 00:19:33.354 "trtype": "TCP", 00:19:33.354 "adrfam": "IPv4", 00:19:33.354 "traddr": "10.0.0.2", 00:19:33.354 "trsvcid": "4420" 00:19:33.354 }, 00:19:33.354 "peer_address": { 00:19:33.354 "trtype": "TCP", 00:19:33.354 "adrfam": "IPv4", 00:19:33.354 "traddr": "10.0.0.1", 00:19:33.354 "trsvcid": "46460" 00:19:33.354 }, 00:19:33.354 "auth": { 00:19:33.354 "state": "completed", 00:19:33.354 "digest": "sha512", 00:19:33.354 "dhgroup": "ffdhe8192" 00:19:33.354 } 00:19:33.354 } 00:19:33.354 ]' 00:19:33.354 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.354 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.354 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.613 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:33.613 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.613 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.613 11:30:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.613 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.613 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:19:34.181 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.181 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:34.181 11:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.181 11:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.439 11:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.439 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.439 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:34.439 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:34.439 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:34.439 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.439 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.439 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:34.439 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:34.439 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.440 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.440 11:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.440 11:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.440 11:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.440 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.440 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.007 00:19:35.007 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.007 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.007 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.267 { 00:19:35.267 "cntlid": 139, 00:19:35.267 "qid": 0, 00:19:35.267 "state": "enabled", 00:19:35.267 "thread": "nvmf_tgt_poll_group_000", 00:19:35.267 "listen_address": { 00:19:35.267 "trtype": "TCP", 00:19:35.267 "adrfam": "IPv4", 00:19:35.267 "traddr": "10.0.0.2", 00:19:35.267 "trsvcid": "4420" 00:19:35.267 }, 00:19:35.267 "peer_address": { 00:19:35.267 "trtype": "TCP", 00:19:35.267 "adrfam": "IPv4", 00:19:35.267 "traddr": "10.0.0.1", 00:19:35.267 "trsvcid": "50798" 00:19:35.267 }, 00:19:35.267 "auth": { 00:19:35.267 "state": "completed", 00:19:35.267 "digest": "sha512", 00:19:35.267 "dhgroup": "ffdhe8192" 00:19:35.267 } 00:19:35.267 } 00:19:35.267 ]' 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.267 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.527 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTBkODk0NTU1YmY1NTk4MTkwNmFjNDE3NWQzYzliYWIvCv7C: --dhchap-ctrl-secret DHHC-1:02:YmEzYTBmOWY1MWI3ZDExOGViMzkzMGFmNjM0NTczNGJlYjE0ZTc2MDk1NmNkYTViUrtZuQ==: 00:19:36.095 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.095 11:30:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:36.095 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.095 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.095 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.095 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.095 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:36.095 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:36.354 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:36.354 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.354 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.354 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:36.354 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:36.354 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.354 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.354 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.354 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.354 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.354 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.354 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.612 00:19:36.870 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.870 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.870 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.870 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.871 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.871 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.871 11:30:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.871 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.871 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.871 { 00:19:36.871 "cntlid": 141, 00:19:36.871 "qid": 0, 00:19:36.871 "state": "enabled", 00:19:36.871 "thread": "nvmf_tgt_poll_group_000", 00:19:36.871 "listen_address": { 00:19:36.871 "trtype": "TCP", 00:19:36.871 "adrfam": "IPv4", 00:19:36.871 "traddr": "10.0.0.2", 00:19:36.871 "trsvcid": "4420" 00:19:36.871 }, 00:19:36.871 "peer_address": { 00:19:36.871 "trtype": "TCP", 00:19:36.871 "adrfam": "IPv4", 00:19:36.871 "traddr": "10.0.0.1", 00:19:36.871 "trsvcid": "50824" 00:19:36.871 }, 00:19:36.871 "auth": { 00:19:36.871 "state": "completed", 00:19:36.871 "digest": "sha512", 00:19:36.871 "dhgroup": "ffdhe8192" 00:19:36.871 } 00:19:36.871 } 00:19:36.871 ]' 00:19:36.871 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.871 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.871 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.130 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.130 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.130 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.130 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.130 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.388 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTQzNmU1ODY3MTQ4NTE2YjIwMGQ2ZWVmNTdjZGE1N2QxOWE5MWFlYjc2MjQ5ODI2JNmRzw==: --dhchap-ctrl-secret DHHC-1:01:YzFlODYxNzU0MWM5MjkyMDI4NDEwOGNlNWI0NzA5MTlCdB8M: 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.956 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.524 00:19:38.524 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.524 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.524 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.782 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.782 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.782 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.782 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.782 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.782 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.783 { 00:19:38.783 "cntlid": 143, 00:19:38.783 "qid": 0, 00:19:38.783 "state": "enabled", 00:19:38.783 "thread": "nvmf_tgt_poll_group_000", 00:19:38.783 "listen_address": { 00:19:38.783 "trtype": "TCP", 00:19:38.783 "adrfam": "IPv4", 00:19:38.783 "traddr": "10.0.0.2", 00:19:38.783 "trsvcid": "4420" 00:19:38.783 }, 00:19:38.783 "peer_address": { 00:19:38.783 "trtype": "TCP", 00:19:38.783 "adrfam": "IPv4", 00:19:38.783 "traddr": "10.0.0.1", 00:19:38.783 "trsvcid": "50842" 00:19:38.783 }, 00:19:38.783 "auth": { 00:19:38.783 "state": "completed", 00:19:38.783 "digest": "sha512", 00:19:38.783 "dhgroup": "ffdhe8192" 00:19:38.783 } 00:19:38.783 } 00:19:38.783 ]' 00:19:38.783 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.783 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.783 
11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.783 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.783 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.783 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.783 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.783 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.041 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:19:39.607 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.607 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:39.607 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.607 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.607 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.607 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:39.607 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:39.607 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:39.607 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.607 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.607 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.864 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:39.864 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.864 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.864 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.864 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.864 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.864 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:39.864 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.864 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.864 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.864 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.864 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.429 00:19:40.429 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.429 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.429 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.429 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.429 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.429 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.429 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.429 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.429 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.429 { 00:19:40.429 "cntlid": 145, 00:19:40.429 "qid": 0, 00:19:40.429 "state": "enabled", 00:19:40.429 "thread": "nvmf_tgt_poll_group_000", 00:19:40.429 "listen_address": { 00:19:40.429 "trtype": "TCP", 00:19:40.429 "adrfam": "IPv4", 00:19:40.429 "traddr": "10.0.0.2", 00:19:40.429 "trsvcid": "4420" 00:19:40.429 }, 00:19:40.429 "peer_address": { 00:19:40.429 "trtype": "TCP", 00:19:40.429 "adrfam": "IPv4", 00:19:40.429 "traddr": "10.0.0.1", 00:19:40.429 "trsvcid": "50880" 00:19:40.429 }, 00:19:40.429 "auth": { 00:19:40.429 "state": "completed", 00:19:40.429 "digest": "sha512", 00:19:40.429 "dhgroup": "ffdhe8192" 00:19:40.429 } 00:19:40.429 } 00:19:40.429 ]' 00:19:40.429 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.687 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.687 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.687 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.687 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.687 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.687 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.687 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.945 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZmVhYzA4NTA5YWU1MmEyY2EyYzliYTc2NjFjNDBjN2EyOTBiNDE5Y2Q3Mjc1NzVm13nxnA==: --dhchap-ctrl-secret DHHC-1:03:ZmI4YjM0OGYyMWNlMzAzYmU4OWJlZmNjNWE2YzRjOGY2ZGU1Y2ZjZTUyYTMxMDU0Njc2MmE0MmRjNDA3YzUxZLdgC9g=: 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:41.511 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:41.770 request: 00:19:41.770 { 00:19:41.770 "name": "nvme0", 00:19:41.770 "trtype": "tcp", 00:19:41.770 "traddr": "10.0.0.2", 00:19:41.770 "adrfam": "ipv4", 00:19:41.770 "trsvcid": "4420", 00:19:41.770 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:41.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:41.770 "prchk_reftag": false, 00:19:41.770 "prchk_guard": false, 00:19:41.770 "hdgst": false, 00:19:41.770 "ddgst": false, 00:19:41.770 "dhchap_key": "key2", 00:19:41.770 "method": "bdev_nvme_attach_controller", 00:19:41.770 "req_id": 1 00:19:41.770 } 00:19:41.770 Got JSON-RPC error response 00:19:41.770 response: 00:19:41.770 { 00:19:41.770 "code": -5, 00:19:41.770 "message": "Input/output error" 00:19:41.770 } 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:41.770 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:42.338 request: 00:19:42.338 { 00:19:42.338 "name": "nvme0", 00:19:42.338 "trtype": "tcp", 00:19:42.338 "traddr": "10.0.0.2", 00:19:42.338 "adrfam": "ipv4", 00:19:42.338 "trsvcid": "4420", 00:19:42.338 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:42.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:42.338 "prchk_reftag": false, 00:19:42.338 "prchk_guard": false, 00:19:42.338 "hdgst": false, 00:19:42.338 "ddgst": false, 00:19:42.338 "dhchap_key": "key1", 00:19:42.338 "dhchap_ctrlr_key": "ckey2", 00:19:42.338 "method": "bdev_nvme_attach_controller", 00:19:42.338 "req_id": 1 00:19:42.338 } 00:19:42.338 Got JSON-RPC error response 00:19:42.338 response: 00:19:42.338 { 00:19:42.338 "code": -5, 00:19:42.338 "message": "Input/output error" 00:19:42.338 } 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.338 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.906 request: 00:19:42.906 { 00:19:42.906 "name": "nvme0", 00:19:42.906 "trtype": "tcp", 00:19:42.906 "traddr": "10.0.0.2", 00:19:42.906 "adrfam": "ipv4", 00:19:42.906 "trsvcid": "4420", 00:19:42.906 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:42.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:42.906 "prchk_reftag": false, 00:19:42.906 "prchk_guard": false, 00:19:42.906 "hdgst": false, 00:19:42.906 "ddgst": false, 00:19:42.906 "dhchap_key": "key1", 00:19:42.906 "dhchap_ctrlr_key": "ckey1", 00:19:42.906 "method": "bdev_nvme_attach_controller", 00:19:42.906 "req_id": 1 00:19:42.906 } 00:19:42.906 Got JSON-RPC error response 00:19:42.906 response: 00:19:42.906 { 00:19:42.906 "code": -5, 00:19:42.906 "message": "Input/output error" 00:19:42.906 } 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 601056 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 601056 ']' 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 601056 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 601056 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 601056' 00:19:42.906 killing process with pid 601056 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 601056 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 601056 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=622235 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:42.906 11:30:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 622235 00:19:42.907 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 622235 ']' 00:19:42.907 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.907 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.907 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.907 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.907 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.843 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 622235 00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 622235 ']' 00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
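The trace up to this point runs one connect/authenticate pass per DH-CHAP digest, DH group and key index before target/auth.sh restarts nvmf_tgt with -L nvmf_auth for the error-path checks. A minimal sketch of a single pass, using only commands that appear verbatim in this log (rpc_cmd drives the target application, hostrpc expands to scripts/rpc.py -s /var/tmp/host.sock as shown in the trace; the address 10.0.0.2:4420, the NQNs and the key names are specific to this run and would differ elsewhere):

  # target side: allow this host NQN to authenticate with key0/ckey0
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: restrict the initiator to one digest/dhgroup, then attach with the same key pair
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # confirm the qpair negotiated the expected digest/dhgroup and completed authentication
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  # tear down before the next digest/dhgroup/key combination
  hostrpc bdev_nvme_detach_controller nvme0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

The kernel-initiator variant of the same pass is the nvme connect ... --dhchap-secret DHHC-1:xx:... / nvme disconnect pair that recurs throughout the trace above.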
00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.844 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.102 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.670 00:19:44.670 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.670 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.670 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.929 { 00:19:44.929 
"cntlid": 1, 00:19:44.929 "qid": 0, 00:19:44.929 "state": "enabled", 00:19:44.929 "thread": "nvmf_tgt_poll_group_000", 00:19:44.929 "listen_address": { 00:19:44.929 "trtype": "TCP", 00:19:44.929 "adrfam": "IPv4", 00:19:44.929 "traddr": "10.0.0.2", 00:19:44.929 "trsvcid": "4420" 00:19:44.929 }, 00:19:44.929 "peer_address": { 00:19:44.929 "trtype": "TCP", 00:19:44.929 "adrfam": "IPv4", 00:19:44.929 "traddr": "10.0.0.1", 00:19:44.929 "trsvcid": "33908" 00:19:44.929 }, 00:19:44.929 "auth": { 00:19:44.929 "state": "completed", 00:19:44.929 "digest": "sha512", 00:19:44.929 "dhgroup": "ffdhe8192" 00:19:44.929 } 00:19:44.929 } 00:19:44.929 ]' 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.929 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.188 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGEzZGI5ZTI5ODA5MzRmY2MyZmJiZDIzNWI0MWE5M2EwNTI1NDcxMjdjOTU4YTU5NTJlN2MyZTdlZDJiYTlhNhU+q7s=: 00:19:45.755 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.755 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:45.755 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.755 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.755 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.755 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:45.755 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.755 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.755 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.755 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:45.755 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:46.014 11:30:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.014 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:46.014 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.014 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:46.014 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.014 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:46.014 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.014 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.014 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.273 request: 00:19:46.273 { 00:19:46.273 "name": "nvme0", 00:19:46.273 "trtype": "tcp", 00:19:46.273 "traddr": "10.0.0.2", 00:19:46.273 "adrfam": "ipv4", 00:19:46.273 "trsvcid": "4420", 00:19:46.273 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:46.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:46.273 "prchk_reftag": false, 00:19:46.273 "prchk_guard": false, 00:19:46.273 "hdgst": false, 00:19:46.273 "ddgst": false, 00:19:46.273 "dhchap_key": "key3", 00:19:46.273 "method": "bdev_nvme_attach_controller", 00:19:46.273 "req_id": 1 00:19:46.273 } 00:19:46.273 Got JSON-RPC error response 00:19:46.273 response: 00:19:46.273 { 00:19:46.273 "code": -5, 00:19:46.273 "message": "Input/output error" 00:19:46.273 } 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.273 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.274 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.533 request: 00:19:46.533 { 00:19:46.533 "name": "nvme0", 00:19:46.533 "trtype": "tcp", 00:19:46.533 "traddr": "10.0.0.2", 00:19:46.533 "adrfam": "ipv4", 00:19:46.533 "trsvcid": "4420", 00:19:46.533 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:46.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:46.533 "prchk_reftag": false, 00:19:46.533 "prchk_guard": false, 00:19:46.533 "hdgst": false, 00:19:46.533 "ddgst": false, 00:19:46.533 "dhchap_key": "key3", 00:19:46.533 "method": "bdev_nvme_attach_controller", 00:19:46.533 "req_id": 1 00:19:46.533 } 00:19:46.533 Got JSON-RPC error response 00:19:46.533 response: 00:19:46.533 { 00:19:46.533 "code": -5, 00:19:46.533 "message": "Input/output error" 00:19:46.533 } 00:19:46.533 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:46.533 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:46.533 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:46.533 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:46.533 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:46.533 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:46.533 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:46.533 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:46.533 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:46.533 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:46.791 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:47.048 request: 00:19:47.049 { 00:19:47.049 "name": "nvme0", 00:19:47.049 "trtype": "tcp", 00:19:47.049 "traddr": "10.0.0.2", 00:19:47.049 "adrfam": "ipv4", 00:19:47.049 "trsvcid": "4420", 00:19:47.049 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:47.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:47.049 "prchk_reftag": false, 00:19:47.049 "prchk_guard": false, 00:19:47.049 "hdgst": false, 00:19:47.049 "ddgst": false, 00:19:47.049 
"dhchap_key": "key0", 00:19:47.049 "dhchap_ctrlr_key": "key1", 00:19:47.049 "method": "bdev_nvme_attach_controller", 00:19:47.049 "req_id": 1 00:19:47.049 } 00:19:47.049 Got JSON-RPC error response 00:19:47.049 response: 00:19:47.049 { 00:19:47.049 "code": -5, 00:19:47.049 "message": "Input/output error" 00:19:47.049 } 00:19:47.049 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:47.049 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.049 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.049 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.049 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:47.049 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:47.308 00:19:47.308 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:47.308 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:47.308 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.308 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.308 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.308 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.612 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:47.612 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:47.613 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 601134 00:19:47.613 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 601134 ']' 00:19:47.613 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 601134 00:19:47.613 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:47.613 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.613 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 601134 00:19:47.613 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:47.613 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:47.613 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 601134' 00:19:47.613 killing process with pid 601134 00:19:47.613 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 601134 00:19:47.613 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 601134 00:19:47.871 
11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:47.871 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:47.871 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:47.871 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.871 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:47.871 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.871 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.871 rmmod nvme_tcp 00:19:47.871 rmmod nvme_fabrics 00:19:47.871 rmmod nvme_keyring 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 622235 ']' 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 622235 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 622235 ']' 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 622235 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 622235 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 622235' 00:19:48.130 killing process with pid 622235 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 622235 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 622235 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.130 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.665 11:30:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:50.665 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.NIt /tmp/spdk.key-sha256.8vK /tmp/spdk.key-sha384.sPN /tmp/spdk.key-sha512.lBL /tmp/spdk.key-sha512.OrK /tmp/spdk.key-sha384.IgK /tmp/spdk.key-sha256.g3A '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:50.665 00:19:50.665 real 2m13.047s 00:19:50.665 user 5m5.548s 00:19:50.665 sys 0m20.750s 00:19:50.665 11:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:50.665 11:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.665 ************************************ 00:19:50.665 END TEST nvmf_auth_target 00:19:50.665 ************************************ 00:19:50.665 11:30:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:50.665 11:30:33 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:50.665 11:30:33 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:50.665 11:30:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:50.665 11:30:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.665 11:30:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:50.665 ************************************ 00:19:50.665 START TEST nvmf_bdevio_no_huge 00:19:50.665 ************************************ 00:19:50.665 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:50.665 * Looking for test storage... 00:19:50.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
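For reference, every DHCHAP attempt in the nvmf_auth_target trace that ends above goes through the same host-side rpc.py calls; the negative cases differ only in which --dhchap-key/--dhchap-ctrlr-key they pass. A condensed sketch of the successful path, using the socket path, target address and NQNs exactly as logged (RPC is just shorthand here, not a variable from the scripts):
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# allow all digests and DH groups on the host side, as the final set_options call above does
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
# attach with the host key the subsystem actually expects (key0 succeeds in the run above)
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
# tear the controller down again once verified
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0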
00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.666 11:30:33 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:50.666 11:30:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:55.942 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:55.942 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.942 
11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:55.942 Found net devices under 0000:86:00.0: cvl_0_0 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:55.942 Found net devices under 0000:86:00.1: cvl_0_1 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.942 11:30:39 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.942 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:56.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:19:56.201 00:19:56.201 --- 10.0.0.2 ping statistics --- 00:19:56.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.201 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:56.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:56.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:19:56.201 00:19:56.201 --- 10.0.0.1 ping statistics --- 00:19:56.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.201 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=626532 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 626532 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 626532 ']' 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.201 11:30:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.201 [2024-07-15 11:30:39.713788] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:56.201 [2024-07-15 11:30:39.713834] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:56.201 [2024-07-15 11:30:39.788656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.459 [2024-07-15 11:30:39.874065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.459 [2024-07-15 11:30:39.874098] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.459 [2024-07-15 11:30:39.874104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.459 [2024-07-15 11:30:39.874110] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.459 [2024-07-15 11:30:39.874115] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
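The nvmfappstart step above amounts to launching the target inside the target network namespace and waiting for its RPC socket. A condensed sketch of that sequence as logged (waitforlisten is the helper from the common autotest scripts, which polls until the app listens on /var/tmp/spdk.sock; the exact bookkeeping in nvmf/common.sh may differ):
# start the no-huge target pinned to cores 3-6 (-m 0x78) inside the target netns
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# block until the app is up and listening on its UNIX domain socket
waitforlisten "$nvmfpid"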
00:19:56.459 [2024-07-15 11:30:39.874297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:56.459 [2024-07-15 11:30:39.874334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:56.459 [2024-07-15 11:30:39.874444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.459 [2024-07-15 11:30:39.874446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.025 [2024-07-15 11:30:40.561560] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.025 Malloc0 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.025 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.026 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.026 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.026 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.026 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.026 [2024-07-15 11:30:40.605797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.026 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.026 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:57.026 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:57.026 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:57.026 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:57.026 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.026 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.026 { 00:19:57.026 "params": { 00:19:57.026 "name": "Nvme$subsystem", 00:19:57.026 "trtype": "$TEST_TRANSPORT", 00:19:57.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.026 "adrfam": "ipv4", 00:19:57.026 "trsvcid": "$NVMF_PORT", 00:19:57.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.026 "hdgst": ${hdgst:-false}, 00:19:57.026 "ddgst": ${ddgst:-false} 00:19:57.026 }, 00:19:57.026 "method": "bdev_nvme_attach_controller" 00:19:57.026 } 00:19:57.026 EOF 00:19:57.026 )") 00:19:57.026 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:57.284 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:57.284 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:57.284 11:30:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:57.284 "params": { 00:19:57.284 "name": "Nvme1", 00:19:57.284 "trtype": "tcp", 00:19:57.284 "traddr": "10.0.0.2", 00:19:57.284 "adrfam": "ipv4", 00:19:57.284 "trsvcid": "4420", 00:19:57.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.284 "hdgst": false, 00:19:57.284 "ddgst": false 00:19:57.284 }, 00:19:57.284 "method": "bdev_nvme_attach_controller" 00:19:57.284 }' 00:19:57.284 [2024-07-15 11:30:40.654382] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:19:57.284 [2024-07-15 11:30:40.654434] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid626766 ] 00:19:57.284 [2024-07-15 11:30:40.726585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:57.284 [2024-07-15 11:30:40.812938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.284 [2024-07-15 11:30:40.813044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.284 [2024-07-15 11:30:40.813044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.542 I/O targets: 00:19:57.542 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:57.542 00:19:57.542 00:19:57.542 CUnit - A unit testing framework for C - Version 2.1-3 00:19:57.542 http://cunit.sourceforge.net/ 00:19:57.542 00:19:57.542 00:19:57.542 Suite: bdevio tests on: Nvme1n1 00:19:57.542 Test: blockdev write read block ...passed 00:19:57.800 Test: blockdev write zeroes read block ...passed 00:19:57.800 Test: blockdev write zeroes read no split ...passed 00:19:57.800 Test: blockdev write zeroes read split ...passed 00:19:57.800 Test: blockdev write zeroes read split partial ...passed 00:19:57.800 Test: blockdev reset ...[2024-07-15 11:30:41.201713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.800 [2024-07-15 11:30:41.201776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65f300 (9): Bad file descriptor 00:19:57.800 [2024-07-15 11:30:41.213804] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:57.800 passed 00:19:57.800 Test: blockdev write read 8 blocks ...passed 00:19:57.800 Test: blockdev write read size > 128k ...passed 00:19:57.800 Test: blockdev write read invalid size ...passed 00:19:57.800 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:57.800 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:57.800 Test: blockdev write read max offset ...passed 00:19:57.800 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:57.800 Test: blockdev writev readv 8 blocks ...passed 00:19:57.800 Test: blockdev writev readv 30 x 1block ...passed 00:19:58.058 Test: blockdev writev readv block ...passed 00:19:58.058 Test: blockdev writev readv size > 128k ...passed 00:19:58.058 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:58.058 Test: blockdev comparev and writev ...[2024-07-15 11:30:41.428733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.058 [2024-07-15 11:30:41.428763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.058 [2024-07-15 11:30:41.428777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.058 [2024-07-15 11:30:41.428785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:58.058 [2024-07-15 11:30:41.429034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.058 [2024-07-15 11:30:41.429046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:58.058 [2024-07-15 11:30:41.429057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.058 [2024-07-15 11:30:41.429065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:58.058 [2024-07-15 11:30:41.429316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.058 [2024-07-15 11:30:41.429327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:58.058 [2024-07-15 11:30:41.429339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.058 [2024-07-15 11:30:41.429346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:58.058 [2024-07-15 11:30:41.429595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.058 [2024-07-15 11:30:41.429605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:58.058 [2024-07-15 11:30:41.429617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.058 [2024-07-15 11:30:41.429625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:58.058 passed 00:19:58.058 Test: blockdev nvme passthru rw ...passed 00:19:58.058 Test: blockdev nvme passthru vendor specific ...[2024-07-15 11:30:41.512550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.058 [2024-07-15 11:30:41.512567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:58.058 [2024-07-15 11:30:41.512696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.059 [2024-07-15 11:30:41.512706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:58.059 [2024-07-15 11:30:41.512825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.059 [2024-07-15 11:30:41.512836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:58.059 [2024-07-15 11:30:41.512955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.059 [2024-07-15 11:30:41.512965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:58.059 passed 00:19:58.059 Test: blockdev nvme admin passthru ...passed 00:19:58.059 Test: blockdev copy ...passed 00:19:58.059 00:19:58.059 Run Summary: Type Total Ran Passed Failed Inactive 00:19:58.059 suites 1 1 n/a 0 0 00:19:58.059 tests 23 23 23 0 0 00:19:58.059 asserts 152 152 152 0 n/a 00:19:58.059 00:19:58.059 Elapsed time = 1.008 seconds 00:19:58.317 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:58.317 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.317 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.317 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.317 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:58.317 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:58.317 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:58.317 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:58.317 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:58.317 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:58.317 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:58.317 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:58.317 rmmod nvme_tcp 00:19:58.317 rmmod nvme_fabrics 00:19:58.317 rmmod nvme_keyring 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 626532 ']' 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 626532 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 626532 ']' 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 626532 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 626532 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 626532' 00:19:58.576 killing process with pid 626532 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 626532 00:19:58.576 11:30:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 626532 00:19:58.835 11:30:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:58.835 11:30:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:58.835 11:30:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:58.835 11:30:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.835 11:30:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:58.835 11:30:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.835 11:30:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.835 11:30:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.372 11:30:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:01.372 00:20:01.372 real 0m10.518s 00:20:01.372 user 0m13.211s 00:20:01.372 sys 0m5.195s 00:20:01.372 11:30:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.372 11:30:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.372 ************************************ 00:20:01.372 END TEST nvmf_bdevio_no_huge 00:20:01.372 ************************************ 00:20:01.372 11:30:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:01.372 11:30:44 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:01.372 11:30:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:01.372 11:30:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.372 11:30:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:01.372 ************************************ 00:20:01.372 START TEST nvmf_tls 00:20:01.372 ************************************ 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:01.372 * Looking for test storage... 
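The teardown that closes nvmf_bdevio_no_huge above follows the same pattern as the earlier tests; condensed from the logged commands (killprocess in the common scripts is roughly a kill followed by a wait; pid 626532 in this run):
# unload the host-side NVMe/TCP modules (the trace runs set +e first, since not all may be loaded)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# stop the nvmf_tgt started for the test, then drop the namespace and flush the initiator-side address
kill "$nvmfpid" && wait "$nvmfpid"
_remove_spdk_ns
ip -4 addr flush cvl_0_1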
00:20:01.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.372 11:30:44 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.373 11:30:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:06.650 
11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:06.650 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:06.650 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:06.650 Found net devices under 0000:86:00.0: cvl_0_0 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:06.650 Found net devices under 0000:86:00.1: cvl_0_1 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.650 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.651 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:06.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:20:06.910 00:20:06.910 --- 10.0.0.2 ping statistics --- 00:20:06.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.910 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:06.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:20:06.910 00:20:06.910 --- 10.0.0.1 ping statistics --- 00:20:06.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.910 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=630509 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 630509 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 630509 ']' 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.910 11:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.910 [2024-07-15 11:30:50.355057] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:20:06.910 [2024-07-15 11:30:50.355100] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.910 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.910 [2024-07-15 11:30:50.426403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.169 [2024-07-15 11:30:50.504967] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.169 [2024-07-15 11:30:50.504999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
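Condensed, the plumbing the harness performs here is one network namespace carrying the target-side port, an address on each end, a firewall exception for the NVMe/TCP port, and a reachability check in both directions. A minimal sketch of those steps as they appear in the trace (the interface names cvl_0_0/cvl_0_1 are the ice netdevs discovered above, and 10.0.0.1/10.0.0.2 are the test addresses the harness assigns):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                            # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # move one port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let the NVMe/TCP port in
  ping -c 1 10.0.0.2                                      # host reaches target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target reaches host

The target application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -m 0x2 --wait-for-rpc), so its listener is only reachable through cvl_0_0.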
00:20:07.169 [2024-07-15 11:30:50.505005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.169 [2024-07-15 11:30:50.505011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.169 [2024-07-15 11:30:50.505021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.169 [2024-07-15 11:30:50.505038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.736 11:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.736 11:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:07.736 11:30:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:07.736 11:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:07.737 11:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.737 11:30:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.737 11:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:07.737 11:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:07.995 true 00:20:07.995 11:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:07.995 11:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:07.995 11:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:07.995 11:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:07.995 11:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:08.254 11:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:08.254 11:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:08.512 11:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:08.512 11:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:08.512 11:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:08.512 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:08.512 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:08.771 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:08.771 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:08.771 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:08.771 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:09.030 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:09.030 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:09.030 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:09.030 11:30:52 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # jq -r .enable_ktls 00:20:09.030 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.289 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:09.289 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:09.289 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:09.548 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.548 11:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:09.548 11:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:09.806 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:09.806 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:09.806 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.iOBRxmzdmu 00:20:09.806 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:09.806 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.v5WC2zJplZ 00:20:09.806 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:09.806 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:09.806 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.iOBRxmzdmu 00:20:09.806 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.v5WC2zJplZ 00:20:09.806 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:09.806 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:10.064 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.iOBRxmzdmu 00:20:10.064 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iOBRxmzdmu 00:20:10.064 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:10.322 [2024-07-15 11:30:53.741909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.323 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:10.581 11:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:10.581 [2024-07-15 11:30:54.070752] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.581 [2024-07-15 11:30:54.070934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.581 11:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:10.840 malloc0 00:20:10.840 11:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.840 11:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iOBRxmzdmu 00:20:11.134 [2024-07-15 11:30:54.580456] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:11.134 11:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.iOBRxmzdmu 00:20:11.134 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.343 Initializing NVMe Controllers 00:20:23.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:23.343 Initialization complete. Launching workers. 
00:20:23.343 ======================================================== 00:20:23.343 Latency(us) 00:20:23.343 Device Information : IOPS MiB/s Average min max 00:20:23.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16528.68 64.57 3872.50 778.68 6808.73 00:20:23.343 ======================================================== 00:20:23.343 Total : 16528.68 64.57 3872.50 778.68 6808.73 00:20:23.343 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iOBRxmzdmu 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iOBRxmzdmu' 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=632869 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 632869 /var/tmp/bdevperf.sock 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 632869 ']' 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.343 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.343 [2024-07-15 11:31:04.759164] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
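Stripped of the xtrace noise, the passing data path above is: write a PSK in NVMe TLS interchange form to a mode-0600 file, register it on the target for one subsystem/host pair behind a TLS-enabled listener, and drive I/O from an initiator that presents the same identity and key. The sketch below restates those steps with the same RPCs the trace issues. ./scripts/rpc.py and ./build/bin/spdk_nvme_perf are shorthand for the full workspace paths above, /tmp/psk1 stands in for the mktemp path /tmp/tmp.iOBRxmzdmu, and the interchange layout noted in the comment (prefix, two-digit hash indicator, base64 of the key bytes with a CRC-32 appended) is inferred from the helper's output rather than quoted from its source. The same sock_impl_set_options/sock_impl_get_options pairing is what the trace uses just before this to confirm that --tls-version and --enable-ktls/--disable-ktls round-trip.

  # PSK in interchange form; the string is copied verbatim from the trace.
  # Assumed layout: NVMeTLSkey-1 : hash indicator : base64(key bytes + CRC-32) :
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/psk1
  chmod 0600 /tmp/psk1

  # Target side: TLS 1.3 on the ssl sock implementation, finish init of the
  # --wait-for-rpc target, then transport, subsystem, TLS-enabled listener (-k),
  # a malloc namespace, and the host-to-PSK binding.
  ./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/psk1

  # Initiator side: perf over TLS, presenting the matching hostnqn and key
  # (run in the same namespace in the trace).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path /tmp/psk1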
00:20:23.343 [2024-07-15 11:31:04.759212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632869 ] 00:20:23.343 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.343 [2024-07-15 11:31:04.827039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.343 [2024-07-15 11:31:04.905947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.343 11:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.343 11:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:23.343 11:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iOBRxmzdmu 00:20:23.343 [2024-07-15 11:31:05.732681] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.343 [2024-07-15 11:31:05.732749] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:23.343 TLSTESTn1 00:20:23.343 11:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:23.343 Running I/O for 10 seconds... 00:20:33.317 00:20:33.317 Latency(us) 00:20:33.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.317 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:33.317 Verification LBA range: start 0x0 length 0x2000 00:20:33.317 TLSTESTn1 : 10.02 3399.86 13.28 0.00 0.00 37592.73 6610.59 238892.97 00:20:33.317 =================================================================================================================== 00:20:33.317 Total : 3399.86 13.28 0.00 0.00 37592.73 6610.59 238892.97 00:20:33.317 0 00:20:33.317 11:31:15 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:33.317 11:31:15 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 632869 00:20:33.317 11:31:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 632869 ']' 00:20:33.317 11:31:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 632869 00:20:33.317 11:31:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:33.317 11:31:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.317 11:31:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 632869 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 632869' 00:20:33.317 killing process with pid 632869 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 632869 00:20:33.317 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.317 00:20:33.317 Latency(us) 00:20:33.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:33.317 =================================================================================================================== 00:20:33.317 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.317 [2024-07-15 11:31:16.020421] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 632869 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v5WC2zJplZ 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v5WC2zJplZ 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v5WC2zJplZ 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.v5WC2zJplZ' 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=634704 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 634704 /var/tmp/bdevperf.sock 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 634704 ']' 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.317 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.317 [2024-07-15 11:31:16.248337] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
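The bdevperf pass above repeats the same check through the RPC interface instead of spdk_nvme_perf: bdevperf starts idle (-z) on its own RPC socket, a TLS controller is attached with the hostnqn and key the target registered, and perform_tests runs the verify workload and produces the results table. The remaining runs in this log deliberately break one parameter at a time (wrong key file, wrong hostnqn, wrong subnqn, and finally no key at all), and each is expected to fail at attach. A sketch of the passing variant, with relative paths standing in for the full workspace paths and the key file name taken from the trace:

  # Start bdevperf idle; -z makes it wait for configuration over its RPC socket.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # Attach a TLS-protected controller using the registered identity and key.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iOBRxmzdmu

  # Drive the configured verify workload over the attached bdev.
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests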
00:20:33.317 [2024-07-15 11:31:16.248387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634704 ] 00:20:33.317 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.317 [2024-07-15 11:31:16.316457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.317 [2024-07-15 11:31:16.388079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.576 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.576 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:33.576 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v5WC2zJplZ 00:20:33.836 [2024-07-15 11:31:17.206881] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.836 [2024-07-15 11:31:17.206951] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:33.836 [2024-07-15 11:31:17.217613] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:33.836 [2024-07-15 11:31:17.218135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199a570 (107): Transport endpoint is not connected 00:20:33.836 [2024-07-15 11:31:17.219128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199a570 (9): Bad file descriptor 00:20:33.836 [2024-07-15 11:31:17.220129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:33.836 [2024-07-15 11:31:17.220139] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:33.836 [2024-07-15 11:31:17.220148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:33.836 request: 00:20:33.836 { 00:20:33.836 "name": "TLSTEST", 00:20:33.836 "trtype": "tcp", 00:20:33.836 "traddr": "10.0.0.2", 00:20:33.836 "adrfam": "ipv4", 00:20:33.836 "trsvcid": "4420", 00:20:33.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.836 "prchk_reftag": false, 00:20:33.836 "prchk_guard": false, 00:20:33.836 "hdgst": false, 00:20:33.836 "ddgst": false, 00:20:33.836 "psk": "/tmp/tmp.v5WC2zJplZ", 00:20:33.836 "method": "bdev_nvme_attach_controller", 00:20:33.836 "req_id": 1 00:20:33.836 } 00:20:33.836 Got JSON-RPC error response 00:20:33.836 response: 00:20:33.836 { 00:20:33.836 "code": -5, 00:20:33.836 "message": "Input/output error" 00:20:33.836 } 00:20:33.836 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 634704 00:20:33.836 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 634704 ']' 00:20:33.836 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 634704 00:20:33.836 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:33.836 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.836 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 634704 00:20:33.836 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:33.836 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:33.836 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 634704' 00:20:33.836 killing process with pid 634704 00:20:33.836 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 634704 00:20:33.836 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.836 00:20:33.836 Latency(us) 00:20:33.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.836 =================================================================================================================== 00:20:33.836 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:33.836 [2024-07-15 11:31:17.294370] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:33.836 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 634704 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iOBRxmzdmu 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iOBRxmzdmu 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iOBRxmzdmu 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iOBRxmzdmu' 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=634944 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 634944 /var/tmp/bdevperf.sock 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 634944 ']' 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.095 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.095 [2024-07-15 11:31:17.508718] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:20:34.095 [2024-07-15 11:31:17.508767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634944 ] 00:20:34.095 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.095 [2024-07-15 11:31:17.570309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.095 [2024-07-15 11:31:17.649355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.030 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.030 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:35.030 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.iOBRxmzdmu 00:20:35.030 [2024-07-15 11:31:18.479365] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.031 [2024-07-15 11:31:18.479428] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:35.031 [2024-07-15 11:31:18.490733] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:35.031 [2024-07-15 11:31:18.490755] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:35.031 [2024-07-15 11:31:18.490780] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:35.031 [2024-07-15 11:31:18.491558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20dd570 (107): Transport endpoint is not connected 00:20:35.031 [2024-07-15 11:31:18.492551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20dd570 (9): Bad file descriptor 00:20:35.031 [2024-07-15 11:31:18.493553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:35.031 [2024-07-15 11:31:18.493563] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:35.031 [2024-07-15 11:31:18.493572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:35.031 request: 00:20:35.031 { 00:20:35.031 "name": "TLSTEST", 00:20:35.031 "trtype": "tcp", 00:20:35.031 "traddr": "10.0.0.2", 00:20:35.031 "adrfam": "ipv4", 00:20:35.031 "trsvcid": "4420", 00:20:35.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.031 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:35.031 "prchk_reftag": false, 00:20:35.031 "prchk_guard": false, 00:20:35.031 "hdgst": false, 00:20:35.031 "ddgst": false, 00:20:35.031 "psk": "/tmp/tmp.iOBRxmzdmu", 00:20:35.031 "method": "bdev_nvme_attach_controller", 00:20:35.031 "req_id": 1 00:20:35.031 } 00:20:35.031 Got JSON-RPC error response 00:20:35.031 response: 00:20:35.031 { 00:20:35.031 "code": -5, 00:20:35.031 "message": "Input/output error" 00:20:35.031 } 00:20:35.031 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 634944 00:20:35.031 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 634944 ']' 00:20:35.031 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 634944 00:20:35.031 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:35.031 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.031 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 634944 00:20:35.031 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:35.031 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:35.031 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 634944' 00:20:35.031 killing process with pid 634944 00:20:35.031 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 634944 00:20:35.031 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.031 00:20:35.031 Latency(us) 00:20:35.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.031 =================================================================================================================== 00:20:35.031 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.031 [2024-07-15 11:31:18.568414] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.031 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 634944 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iOBRxmzdmu 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iOBRxmzdmu 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iOBRxmzdmu 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iOBRxmzdmu' 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=635176 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 635176 /var/tmp/bdevperf.sock 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 635176 ']' 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.290 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.290 [2024-07-15 11:31:18.793593] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:20:35.290 [2024-07-15 11:31:18.793636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635176 ] 00:20:35.290 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.290 [2024-07-15 11:31:18.854078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.548 [2024-07-15 11:31:18.922666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.113 11:31:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.113 11:31:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:36.113 11:31:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iOBRxmzdmu 00:20:36.372 [2024-07-15 11:31:19.772480] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.372 [2024-07-15 11:31:19.772552] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:36.372 [2024-07-15 11:31:19.784015] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:36.372 [2024-07-15 11:31:19.784036] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:36.372 [2024-07-15 11:31:19.784059] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:36.372 [2024-07-15 11:31:19.784873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x822570 (107): Transport endpoint is not connected 00:20:36.372 [2024-07-15 11:31:19.785866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x822570 (9): Bad file descriptor 00:20:36.372 [2024-07-15 11:31:19.786868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:36.372 [2024-07-15 11:31:19.786877] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:36.372 [2024-07-15 11:31:19.786886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:36.372 request: 00:20:36.372 { 00:20:36.372 "name": "TLSTEST", 00:20:36.372 "trtype": "tcp", 00:20:36.372 "traddr": "10.0.0.2", 00:20:36.372 "adrfam": "ipv4", 00:20:36.372 "trsvcid": "4420", 00:20:36.372 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.372 "prchk_reftag": false, 00:20:36.372 "prchk_guard": false, 00:20:36.372 "hdgst": false, 00:20:36.372 "ddgst": false, 00:20:36.372 "psk": "/tmp/tmp.iOBRxmzdmu", 00:20:36.372 "method": "bdev_nvme_attach_controller", 00:20:36.372 "req_id": 1 00:20:36.372 } 00:20:36.372 Got JSON-RPC error response 00:20:36.372 response: 00:20:36.372 { 00:20:36.372 "code": -5, 00:20:36.372 "message": "Input/output error" 00:20:36.372 } 00:20:36.372 11:31:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 635176 00:20:36.372 11:31:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 635176 ']' 00:20:36.372 11:31:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 635176 00:20:36.372 11:31:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:36.372 11:31:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:36.372 11:31:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 635176 00:20:36.372 11:31:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:36.372 11:31:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:36.372 11:31:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 635176' 00:20:36.372 killing process with pid 635176 00:20:36.372 11:31:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 635176 00:20:36.372 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.372 00:20:36.372 Latency(us) 00:20:36.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.372 =================================================================================================================== 00:20:36.372 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.372 [2024-07-15 11:31:19.860990] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:36.372 11:31:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 635176 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=635418 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 635418 /var/tmp/bdevperf.sock 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 635418 ']' 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.630 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.630 [2024-07-15 11:31:20.086174] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:20:36.630 [2024-07-15 11:31:20.086219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635418 ] 00:20:36.630 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.630 [2024-07-15 11:31:20.150960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.888 [2024-07-15 11:31:20.222263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.455 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.455 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:37.455 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:37.714 [2024-07-15 11:31:21.067350] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:37.714 [2024-07-15 11:31:21.069497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb07af0 (9): Bad file descriptor 00:20:37.714 [2024-07-15 11:31:21.070495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.714 [2024-07-15 11:31:21.070507] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:37.714 [2024-07-15 11:31:21.070515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
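
This attach is the expected-failure leg: the listener was created with TLS required, so bdev_nvme_attach_controller without --psk has to fail, and the NOT wrapper turns that failure into a pass. A hedged sketch of the same assertion written directly against the bdevperf RPC socket (the if/exit handling is an illustration, not the in-tree NOT helper):

# Assert that attaching WITHOUT a PSK to the TLS-only listener is rejected.
if "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1; then
    echo "ERROR: attach without a PSK unexpectedly succeeded" >&2
    exit 1
fi
# The JSON-RPC response below (-5, Input/output error) is what makes this pass.
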
00:20:37.714 request: 00:20:37.714 { 00:20:37.714 "name": "TLSTEST", 00:20:37.714 "trtype": "tcp", 00:20:37.714 "traddr": "10.0.0.2", 00:20:37.714 "adrfam": "ipv4", 00:20:37.714 "trsvcid": "4420", 00:20:37.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.714 "prchk_reftag": false, 00:20:37.714 "prchk_guard": false, 00:20:37.714 "hdgst": false, 00:20:37.714 "ddgst": false, 00:20:37.714 "method": "bdev_nvme_attach_controller", 00:20:37.714 "req_id": 1 00:20:37.714 } 00:20:37.714 Got JSON-RPC error response 00:20:37.714 response: 00:20:37.714 { 00:20:37.714 "code": -5, 00:20:37.714 "message": "Input/output error" 00:20:37.714 } 00:20:37.714 11:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 635418 00:20:37.714 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 635418 ']' 00:20:37.714 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 635418 00:20:37.714 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:37.714 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.714 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 635418 00:20:37.714 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:37.714 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:37.714 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 635418' 00:20:37.714 killing process with pid 635418 00:20:37.714 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 635418 00:20:37.714 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.714 00:20:37.714 Latency(us) 00:20:37.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.714 =================================================================================================================== 00:20:37.714 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.714 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 635418 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 630509 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 630509 ']' 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 630509 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 630509 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 630509' 00:20:37.972 killing 
process with pid 630509 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 630509 00:20:37.972 [2024-07-15 11:31:21.363553] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 630509 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:37.972 11:31:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:37.973 11:31:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:37.973 11:31:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:37.973 11:31:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:37.973 11:31:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.WUYqWABvAG 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.WUYqWABvAG 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=635666 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 635666 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 635666 ']' 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.231 11:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.231 [2024-07-15 11:31:21.659392] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
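
The key_long string generated here follows the NVMe TLS PSK interchange layout: an "NVMeTLSkey-1" prefix, a two-digit hash identifier, and a base64 field holding the configured key bytes plus a CRC-32. A sketch of how a key like the one above could be produced and staged (this approximates what the format_interchange_psk/format_key helpers do via their embedded python snippet; the CRC byte order and the mapping of digest "2" to the "02" field are assumptions based on that layout):

# Build an interchange-format TLS PSK and stage it with restrictive permissions.
key=00112233445566778899aabbccddeeff0011223344556677   # same hex string as the log
digest=2                                               # rendered as the "02" field above

key_long=$(python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
' "$key" "$digest")

key_path=$(mktemp)
echo -n "$key_long" > "$key_path"
chmod 0600 "$key_path"    # later steps show anything looser than 0600 is rejected
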
00:20:38.231 [2024-07-15 11:31:21.659439] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.231 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.231 [2024-07-15 11:31:21.722850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.231 [2024-07-15 11:31:21.801475] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.231 [2024-07-15 11:31:21.801510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.231 [2024-07-15 11:31:21.801518] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.231 [2024-07-15 11:31:21.801524] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.231 [2024-07-15 11:31:21.801530] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.231 [2024-07-15 11:31:21.801546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.166 11:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.166 11:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:39.166 11:31:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.166 11:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.166 11:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.166 11:31:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.166 11:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.WUYqWABvAG 00:20:39.166 11:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WUYqWABvAG 00:20:39.166 11:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:39.166 [2024-07-15 11:31:22.652942] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.166 11:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:39.425 11:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:39.425 [2024-07-15 11:31:22.997816] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.425 [2024-07-15 11:31:22.998002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.425 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:39.684 malloc0 00:20:39.684 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.WUYqWABvAG 00:20:39.943 [2024-07-15 11:31:23.487097] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WUYqWABvAG 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WUYqWABvAG' 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=635929 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 635929 /var/tmp/bdevperf.sock 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 635929 ']' 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.943 11:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.943 [2024-07-15 11:31:23.529058] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
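
For reference, the target-side TLS setup performed by setup_nvmf_tgt above condenses to the following RPC sequence (commands and flags are the ones shown in the log; rpc.py talks to the target's default /var/tmp/spdk.sock, and key_path stands in for the 0600 key file staged earlier):

rpc="$SPDK_ROOT/scripts/rpc.py"

# Transport, subsystem, and a TLS-enabled listener (-k).
"$rpc" nvmf_create_transport -t tcp -o
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k

# Backing namespace and the host that is allowed to connect with this PSK.
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"
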
00:20:39.943 [2024-07-15 11:31:23.529103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635929 ] 00:20:40.202 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.202 [2024-07-15 11:31:23.597834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.202 [2024-07-15 11:31:23.670187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.136 11:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.136 11:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:41.136 11:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WUYqWABvAG 00:20:41.136 [2024-07-15 11:31:24.520885] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.136 [2024-07-15 11:31:24.520953] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:41.136 TLSTESTn1 00:20:41.136 11:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:41.136 Running I/O for 10 seconds... 00:20:51.211 00:20:51.211 Latency(us) 00:20:51.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.211 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:51.211 Verification LBA range: start 0x0 length 0x2000 00:20:51.211 TLSTESTn1 : 10.04 4521.98 17.66 0.00 0.00 28240.24 5755.77 52884.70 00:20:51.211 =================================================================================================================== 00:20:51.211 Total : 4521.98 17.66 0.00 0.00 28240.24 5755.77 52884.70 00:20:51.211 0 00:20:51.211 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:51.211 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 635929 00:20:51.211 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 635929 ']' 00:20:51.211 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 635929 00:20:51.211 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:51.211 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.211 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 635929 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 635929' 00:20:51.471 killing process with pid 635929 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 635929 00:20:51.471 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.471 00:20:51.471 Latency(us) 00:20:51.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:20:51.471 =================================================================================================================== 00:20:51.471 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.471 [2024-07-15 11:31:34.805680] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 635929 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.WUYqWABvAG 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WUYqWABvAG 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WUYqWABvAG 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WUYqWABvAG 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WUYqWABvAG' 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=637793 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 637793 /var/tmp/bdevperf.sock 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 637793 ']' 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.471 11:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.471 [2024-07-15 11:31:35.039214] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
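
With the PSK supplied, the attach succeeds, TLSTESTn1 appears, and the data path is exercised by telling the already-running bdevperf process to execute its configured job over RPC. A sketch of that success-path pair of commands, reusing the paths and flags from the log (the -t 20 given to bdevperf.py appears to be its RPC timeout, while the 10-second runtime comes from the -t 10 passed to bdevperf at launch):

# Attach the TLS-protected controller inside bdevperf, then kick off the workload.
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

# perform_tests runs the -q 128 / -o 4096 / -w verify job configured when bdevperf started.
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -t 20 -s /var/tmp/bdevperf.sock perform_tests
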
00:20:51.471 [2024-07-15 11:31:35.039281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637793 ] 00:20:51.730 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.730 [2024-07-15 11:31:35.106705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.730 [2024-07-15 11:31:35.185855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.297 11:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.297 11:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:52.297 11:31:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WUYqWABvAG 00:20:52.556 [2024-07-15 11:31:36.000863] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.556 [2024-07-15 11:31:36.000912] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:52.556 [2024-07-15 11:31:36.000919] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.WUYqWABvAG 00:20:52.556 request: 00:20:52.556 { 00:20:52.556 "name": "TLSTEST", 00:20:52.556 "trtype": "tcp", 00:20:52.556 "traddr": "10.0.0.2", 00:20:52.556 "adrfam": "ipv4", 00:20:52.556 "trsvcid": "4420", 00:20:52.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.557 "prchk_reftag": false, 00:20:52.557 "prchk_guard": false, 00:20:52.557 "hdgst": false, 00:20:52.557 "ddgst": false, 00:20:52.557 "psk": "/tmp/tmp.WUYqWABvAG", 00:20:52.557 "method": "bdev_nvme_attach_controller", 00:20:52.557 "req_id": 1 00:20:52.557 } 00:20:52.557 Got JSON-RPC error response 00:20:52.557 response: 00:20:52.557 { 00:20:52.557 "code": -1, 00:20:52.557 "message": "Operation not permitted" 00:20:52.557 } 00:20:52.557 11:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 637793 00:20:52.557 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 637793 ']' 00:20:52.557 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 637793 00:20:52.557 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:52.557 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:52.557 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 637793 00:20:52.557 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:52.557 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:52.557 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 637793' 00:20:52.557 killing process with pid 637793 00:20:52.557 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 637793 00:20:52.557 Received shutdown signal, test time was about 10.000000 seconds 00:20:52.557 00:20:52.557 Latency(us) 00:20:52.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.557 =================================================================================================================== 
00:20:52.557 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:52.557 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 637793 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 635666 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 635666 ']' 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 635666 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 635666 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 635666' 00:20:52.816 killing process with pid 635666 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 635666 00:20:52.816 [2024-07-15 11:31:36.286873] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:52.816 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 635666 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=638087 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 638087 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 638087 ']' 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.075 11:31:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.075 [2024-07-15 11:31:36.535954] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:20:53.075 [2024-07-15 11:31:36.536001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.075 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.075 [2024-07-15 11:31:36.607317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.334 [2024-07-15 11:31:36.685286] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.334 [2024-07-15 11:31:36.685318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.334 [2024-07-15 11:31:36.685325] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.334 [2024-07-15 11:31:36.685332] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.334 [2024-07-15 11:31:36.685337] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.334 [2024-07-15 11:31:36.685360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.WUYqWABvAG 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.WUYqWABvAG 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.WUYqWABvAG 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WUYqWABvAG 00:20:53.901 11:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:54.161 [2024-07-15 11:31:37.525289] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.161 11:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:54.161 11:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:54.420 [2024-07-15 11:31:37.886210] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:20:54.420 [2024-07-15 11:31:37.886392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.420 11:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:54.679 malloc0 00:20:54.679 11:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WUYqWABvAG 00:20:54.939 [2024-07-15 11:31:38.439781] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:54.939 [2024-07-15 11:31:38.439807] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:54.939 [2024-07-15 11:31:38.439829] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:54.939 request: 00:20:54.939 { 00:20:54.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.939 "host": "nqn.2016-06.io.spdk:host1", 00:20:54.939 "psk": "/tmp/tmp.WUYqWABvAG", 00:20:54.939 "method": "nvmf_subsystem_add_host", 00:20:54.939 "req_id": 1 00:20:54.939 } 00:20:54.939 Got JSON-RPC error response 00:20:54.939 response: 00:20:54.939 { 00:20:54.939 "code": -32603, 00:20:54.939 "message": "Internal error" 00:20:54.939 } 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 638087 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 638087 ']' 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 638087 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 638087 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 638087' 00:20:54.939 killing process with pid 638087 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 638087 00:20:54.939 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 638087 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.WUYqWABvAG 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=638499 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 638499 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 638499 ']' 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.198 11:31:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.198 [2024-07-15 11:31:38.762058] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:20:55.198 [2024-07-15 11:31:38.762104] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.198 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.459 [2024-07-15 11:31:38.832825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.459 [2024-07-15 11:31:38.899682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.459 [2024-07-15 11:31:38.899728] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.459 [2024-07-15 11:31:38.899735] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.459 [2024-07-15 11:31:38.899740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.459 [2024-07-15 11:31:38.899745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
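
Both failures above (bdev_nvme_attach_controller answering "Operation not permitted" and nvmf_subsystem_add_host answering "Internal error") are caused solely by the key file's mode bits, which is why the test flips the key to 0666 and back. A hedged sketch of that permission round-trip on the target side (commands as shown in the log; the if/exit check is an illustration of the expected rejection):

# A world-readable PSK file must be rejected; 0600 is accepted again afterwards.
chmod 0666 "$key_path"
if "$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$key_path"; then
    echo "ERROR: loose key permissions were accepted" >&2
    exit 1
fi

chmod 0600 "$key_path"   # restored before the fresh nvmfappstart/setup pass that follows
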
00:20:55.459 [2024-07-15 11:31:38.899763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.027 11:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.027 11:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:56.027 11:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:56.027 11:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:56.027 11:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.027 11:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.027 11:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.WUYqWABvAG 00:20:56.027 11:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WUYqWABvAG 00:20:56.027 11:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:56.285 [2024-07-15 11:31:39.743107] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.285 11:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:56.544 11:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:56.544 [2024-07-15 11:31:40.104052] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.544 [2024-07-15 11:31:40.104245] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.544 11:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:56.803 malloc0 00:20:56.803 11:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:57.062 11:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WUYqWABvAG 00:20:57.062 [2024-07-15 11:31:40.645580] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:57.321 11:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=638808 00:20:57.321 11:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.321 11:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.321 11:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 638808 /var/tmp/bdevperf.sock 00:20:57.321 11:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 638808 ']' 00:20:57.321 11:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.321 11:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.321 11:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.321 11:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.321 11:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.321 [2024-07-15 11:31:40.722471] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:20:57.321 [2024-07-15 11:31:40.722524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid638808 ] 00:20:57.321 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.321 [2024-07-15 11:31:40.789762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.321 [2024-07-15 11:31:40.863086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.257 11:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.257 11:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:58.257 11:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WUYqWABvAG 00:20:58.257 [2024-07-15 11:31:41.682151] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.257 [2024-07-15 11:31:41.682230] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:58.257 TLSTESTn1 00:20:58.257 11:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:58.515 11:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:58.515 "subsystems": [ 00:20:58.515 { 00:20:58.515 "subsystem": "keyring", 00:20:58.515 "config": [] 00:20:58.515 }, 00:20:58.515 { 00:20:58.515 "subsystem": "iobuf", 00:20:58.516 "config": [ 00:20:58.516 { 00:20:58.516 "method": "iobuf_set_options", 00:20:58.516 "params": { 00:20:58.516 "small_pool_count": 8192, 00:20:58.516 "large_pool_count": 1024, 00:20:58.516 "small_bufsize": 8192, 00:20:58.516 "large_bufsize": 135168 00:20:58.516 } 00:20:58.516 } 00:20:58.516 ] 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "subsystem": "sock", 00:20:58.516 "config": [ 00:20:58.516 { 00:20:58.516 "method": "sock_set_default_impl", 00:20:58.516 "params": { 00:20:58.516 "impl_name": "posix" 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "sock_impl_set_options", 00:20:58.516 "params": { 00:20:58.516 "impl_name": "ssl", 00:20:58.516 "recv_buf_size": 4096, 00:20:58.516 "send_buf_size": 4096, 00:20:58.516 "enable_recv_pipe": true, 00:20:58.516 "enable_quickack": false, 00:20:58.516 "enable_placement_id": 0, 00:20:58.516 "enable_zerocopy_send_server": true, 00:20:58.516 "enable_zerocopy_send_client": false, 00:20:58.516 "zerocopy_threshold": 0, 00:20:58.516 "tls_version": 0, 00:20:58.516 "enable_ktls": false 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "sock_impl_set_options", 00:20:58.516 "params": { 00:20:58.516 "impl_name": "posix", 00:20:58.516 "recv_buf_size": 2097152, 00:20:58.516 
"send_buf_size": 2097152, 00:20:58.516 "enable_recv_pipe": true, 00:20:58.516 "enable_quickack": false, 00:20:58.516 "enable_placement_id": 0, 00:20:58.516 "enable_zerocopy_send_server": true, 00:20:58.516 "enable_zerocopy_send_client": false, 00:20:58.516 "zerocopy_threshold": 0, 00:20:58.516 "tls_version": 0, 00:20:58.516 "enable_ktls": false 00:20:58.516 } 00:20:58.516 } 00:20:58.516 ] 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "subsystem": "vmd", 00:20:58.516 "config": [] 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "subsystem": "accel", 00:20:58.516 "config": [ 00:20:58.516 { 00:20:58.516 "method": "accel_set_options", 00:20:58.516 "params": { 00:20:58.516 "small_cache_size": 128, 00:20:58.516 "large_cache_size": 16, 00:20:58.516 "task_count": 2048, 00:20:58.516 "sequence_count": 2048, 00:20:58.516 "buf_count": 2048 00:20:58.516 } 00:20:58.516 } 00:20:58.516 ] 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "subsystem": "bdev", 00:20:58.516 "config": [ 00:20:58.516 { 00:20:58.516 "method": "bdev_set_options", 00:20:58.516 "params": { 00:20:58.516 "bdev_io_pool_size": 65535, 00:20:58.516 "bdev_io_cache_size": 256, 00:20:58.516 "bdev_auto_examine": true, 00:20:58.516 "iobuf_small_cache_size": 128, 00:20:58.516 "iobuf_large_cache_size": 16 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "bdev_raid_set_options", 00:20:58.516 "params": { 00:20:58.516 "process_window_size_kb": 1024 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "bdev_iscsi_set_options", 00:20:58.516 "params": { 00:20:58.516 "timeout_sec": 30 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "bdev_nvme_set_options", 00:20:58.516 "params": { 00:20:58.516 "action_on_timeout": "none", 00:20:58.516 "timeout_us": 0, 00:20:58.516 "timeout_admin_us": 0, 00:20:58.516 "keep_alive_timeout_ms": 10000, 00:20:58.516 "arbitration_burst": 0, 00:20:58.516 "low_priority_weight": 0, 00:20:58.516 "medium_priority_weight": 0, 00:20:58.516 "high_priority_weight": 0, 00:20:58.516 "nvme_adminq_poll_period_us": 10000, 00:20:58.516 "nvme_ioq_poll_period_us": 0, 00:20:58.516 "io_queue_requests": 0, 00:20:58.516 "delay_cmd_submit": true, 00:20:58.516 "transport_retry_count": 4, 00:20:58.516 "bdev_retry_count": 3, 00:20:58.516 "transport_ack_timeout": 0, 00:20:58.516 "ctrlr_loss_timeout_sec": 0, 00:20:58.516 "reconnect_delay_sec": 0, 00:20:58.516 "fast_io_fail_timeout_sec": 0, 00:20:58.516 "disable_auto_failback": false, 00:20:58.516 "generate_uuids": false, 00:20:58.516 "transport_tos": 0, 00:20:58.516 "nvme_error_stat": false, 00:20:58.516 "rdma_srq_size": 0, 00:20:58.516 "io_path_stat": false, 00:20:58.516 "allow_accel_sequence": false, 00:20:58.516 "rdma_max_cq_size": 0, 00:20:58.516 "rdma_cm_event_timeout_ms": 0, 00:20:58.516 "dhchap_digests": [ 00:20:58.516 "sha256", 00:20:58.516 "sha384", 00:20:58.516 "sha512" 00:20:58.516 ], 00:20:58.516 "dhchap_dhgroups": [ 00:20:58.516 "null", 00:20:58.516 "ffdhe2048", 00:20:58.516 "ffdhe3072", 00:20:58.516 "ffdhe4096", 00:20:58.516 "ffdhe6144", 00:20:58.516 "ffdhe8192" 00:20:58.516 ] 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "bdev_nvme_set_hotplug", 00:20:58.516 "params": { 00:20:58.516 "period_us": 100000, 00:20:58.516 "enable": false 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "bdev_malloc_create", 00:20:58.516 "params": { 00:20:58.516 "name": "malloc0", 00:20:58.516 "num_blocks": 8192, 00:20:58.516 "block_size": 4096, 00:20:58.516 "physical_block_size": 4096, 00:20:58.516 "uuid": 
"7e8ae0ab-8981-4b85-9b24-a7da57e4d037", 00:20:58.516 "optimal_io_boundary": 0 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "bdev_wait_for_examine" 00:20:58.516 } 00:20:58.516 ] 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "subsystem": "nbd", 00:20:58.516 "config": [] 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "subsystem": "scheduler", 00:20:58.516 "config": [ 00:20:58.516 { 00:20:58.516 "method": "framework_set_scheduler", 00:20:58.516 "params": { 00:20:58.516 "name": "static" 00:20:58.516 } 00:20:58.516 } 00:20:58.516 ] 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "subsystem": "nvmf", 00:20:58.516 "config": [ 00:20:58.516 { 00:20:58.516 "method": "nvmf_set_config", 00:20:58.516 "params": { 00:20:58.516 "discovery_filter": "match_any", 00:20:58.516 "admin_cmd_passthru": { 00:20:58.516 "identify_ctrlr": false 00:20:58.516 } 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "nvmf_set_max_subsystems", 00:20:58.516 "params": { 00:20:58.516 "max_subsystems": 1024 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "nvmf_set_crdt", 00:20:58.516 "params": { 00:20:58.516 "crdt1": 0, 00:20:58.516 "crdt2": 0, 00:20:58.516 "crdt3": 0 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "nvmf_create_transport", 00:20:58.516 "params": { 00:20:58.516 "trtype": "TCP", 00:20:58.516 "max_queue_depth": 128, 00:20:58.516 "max_io_qpairs_per_ctrlr": 127, 00:20:58.516 "in_capsule_data_size": 4096, 00:20:58.516 "max_io_size": 131072, 00:20:58.516 "io_unit_size": 131072, 00:20:58.516 "max_aq_depth": 128, 00:20:58.516 "num_shared_buffers": 511, 00:20:58.516 "buf_cache_size": 4294967295, 00:20:58.516 "dif_insert_or_strip": false, 00:20:58.516 "zcopy": false, 00:20:58.516 "c2h_success": false, 00:20:58.516 "sock_priority": 0, 00:20:58.516 "abort_timeout_sec": 1, 00:20:58.516 "ack_timeout": 0, 00:20:58.516 "data_wr_pool_size": 0 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "nvmf_create_subsystem", 00:20:58.516 "params": { 00:20:58.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.516 "allow_any_host": false, 00:20:58.516 "serial_number": "SPDK00000000000001", 00:20:58.516 "model_number": "SPDK bdev Controller", 00:20:58.516 "max_namespaces": 10, 00:20:58.516 "min_cntlid": 1, 00:20:58.516 "max_cntlid": 65519, 00:20:58.516 "ana_reporting": false 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "nvmf_subsystem_add_host", 00:20:58.516 "params": { 00:20:58.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.516 "host": "nqn.2016-06.io.spdk:host1", 00:20:58.516 "psk": "/tmp/tmp.WUYqWABvAG" 00:20:58.516 } 00:20:58.516 }, 00:20:58.516 { 00:20:58.516 "method": "nvmf_subsystem_add_ns", 00:20:58.516 "params": { 00:20:58.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.517 "namespace": { 00:20:58.517 "nsid": 1, 00:20:58.517 "bdev_name": "malloc0", 00:20:58.517 "nguid": "7E8AE0AB89814B859B24A7DA57E4D037", 00:20:58.517 "uuid": "7e8ae0ab-8981-4b85-9b24-a7da57e4d037", 00:20:58.517 "no_auto_visible": false 00:20:58.517 } 00:20:58.517 } 00:20:58.517 }, 00:20:58.517 { 00:20:58.517 "method": "nvmf_subsystem_add_listener", 00:20:58.517 "params": { 00:20:58.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.517 "listen_address": { 00:20:58.517 "trtype": "TCP", 00:20:58.517 "adrfam": "IPv4", 00:20:58.517 "traddr": "10.0.0.2", 00:20:58.517 "trsvcid": "4420" 00:20:58.517 }, 00:20:58.517 "secure_channel": true 00:20:58.517 } 00:20:58.517 } 00:20:58.517 ] 00:20:58.517 } 00:20:58.517 ] 00:20:58.517 }' 00:20:58.517 11:31:42 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:58.776 11:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:58.776 "subsystems": [ 00:20:58.776 { 00:20:58.776 "subsystem": "keyring", 00:20:58.776 "config": [] 00:20:58.776 }, 00:20:58.776 { 00:20:58.776 "subsystem": "iobuf", 00:20:58.776 "config": [ 00:20:58.776 { 00:20:58.776 "method": "iobuf_set_options", 00:20:58.776 "params": { 00:20:58.776 "small_pool_count": 8192, 00:20:58.776 "large_pool_count": 1024, 00:20:58.776 "small_bufsize": 8192, 00:20:58.776 "large_bufsize": 135168 00:20:58.776 } 00:20:58.776 } 00:20:58.776 ] 00:20:58.776 }, 00:20:58.776 { 00:20:58.776 "subsystem": "sock", 00:20:58.776 "config": [ 00:20:58.776 { 00:20:58.776 "method": "sock_set_default_impl", 00:20:58.776 "params": { 00:20:58.776 "impl_name": "posix" 00:20:58.776 } 00:20:58.776 }, 00:20:58.776 { 00:20:58.776 "method": "sock_impl_set_options", 00:20:58.776 "params": { 00:20:58.776 "impl_name": "ssl", 00:20:58.776 "recv_buf_size": 4096, 00:20:58.776 "send_buf_size": 4096, 00:20:58.776 "enable_recv_pipe": true, 00:20:58.776 "enable_quickack": false, 00:20:58.776 "enable_placement_id": 0, 00:20:58.776 "enable_zerocopy_send_server": true, 00:20:58.776 "enable_zerocopy_send_client": false, 00:20:58.776 "zerocopy_threshold": 0, 00:20:58.776 "tls_version": 0, 00:20:58.776 "enable_ktls": false 00:20:58.776 } 00:20:58.776 }, 00:20:58.776 { 00:20:58.776 "method": "sock_impl_set_options", 00:20:58.776 "params": { 00:20:58.776 "impl_name": "posix", 00:20:58.776 "recv_buf_size": 2097152, 00:20:58.776 "send_buf_size": 2097152, 00:20:58.776 "enable_recv_pipe": true, 00:20:58.776 "enable_quickack": false, 00:20:58.776 "enable_placement_id": 0, 00:20:58.776 "enable_zerocopy_send_server": true, 00:20:58.776 "enable_zerocopy_send_client": false, 00:20:58.776 "zerocopy_threshold": 0, 00:20:58.776 "tls_version": 0, 00:20:58.776 "enable_ktls": false 00:20:58.776 } 00:20:58.776 } 00:20:58.776 ] 00:20:58.776 }, 00:20:58.776 { 00:20:58.776 "subsystem": "vmd", 00:20:58.776 "config": [] 00:20:58.776 }, 00:20:58.776 { 00:20:58.776 "subsystem": "accel", 00:20:58.776 "config": [ 00:20:58.776 { 00:20:58.776 "method": "accel_set_options", 00:20:58.776 "params": { 00:20:58.776 "small_cache_size": 128, 00:20:58.776 "large_cache_size": 16, 00:20:58.776 "task_count": 2048, 00:20:58.776 "sequence_count": 2048, 00:20:58.776 "buf_count": 2048 00:20:58.776 } 00:20:58.776 } 00:20:58.776 ] 00:20:58.776 }, 00:20:58.776 { 00:20:58.776 "subsystem": "bdev", 00:20:58.776 "config": [ 00:20:58.776 { 00:20:58.776 "method": "bdev_set_options", 00:20:58.777 "params": { 00:20:58.777 "bdev_io_pool_size": 65535, 00:20:58.777 "bdev_io_cache_size": 256, 00:20:58.777 "bdev_auto_examine": true, 00:20:58.777 "iobuf_small_cache_size": 128, 00:20:58.777 "iobuf_large_cache_size": 16 00:20:58.777 } 00:20:58.777 }, 00:20:58.777 { 00:20:58.777 "method": "bdev_raid_set_options", 00:20:58.777 "params": { 00:20:58.777 "process_window_size_kb": 1024 00:20:58.777 } 00:20:58.777 }, 00:20:58.777 { 00:20:58.777 "method": "bdev_iscsi_set_options", 00:20:58.777 "params": { 00:20:58.777 "timeout_sec": 30 00:20:58.777 } 00:20:58.777 }, 00:20:58.777 { 00:20:58.777 "method": "bdev_nvme_set_options", 00:20:58.777 "params": { 00:20:58.777 "action_on_timeout": "none", 00:20:58.777 "timeout_us": 0, 00:20:58.777 "timeout_admin_us": 0, 00:20:58.777 "keep_alive_timeout_ms": 10000, 00:20:58.777 "arbitration_burst": 0, 
00:20:58.777 "low_priority_weight": 0, 00:20:58.777 "medium_priority_weight": 0, 00:20:58.777 "high_priority_weight": 0, 00:20:58.777 "nvme_adminq_poll_period_us": 10000, 00:20:58.777 "nvme_ioq_poll_period_us": 0, 00:20:58.777 "io_queue_requests": 512, 00:20:58.777 "delay_cmd_submit": true, 00:20:58.777 "transport_retry_count": 4, 00:20:58.777 "bdev_retry_count": 3, 00:20:58.777 "transport_ack_timeout": 0, 00:20:58.777 "ctrlr_loss_timeout_sec": 0, 00:20:58.777 "reconnect_delay_sec": 0, 00:20:58.777 "fast_io_fail_timeout_sec": 0, 00:20:58.777 "disable_auto_failback": false, 00:20:58.777 "generate_uuids": false, 00:20:58.777 "transport_tos": 0, 00:20:58.777 "nvme_error_stat": false, 00:20:58.777 "rdma_srq_size": 0, 00:20:58.777 "io_path_stat": false, 00:20:58.777 "allow_accel_sequence": false, 00:20:58.777 "rdma_max_cq_size": 0, 00:20:58.777 "rdma_cm_event_timeout_ms": 0, 00:20:58.777 "dhchap_digests": [ 00:20:58.777 "sha256", 00:20:58.777 "sha384", 00:20:58.777 "sha512" 00:20:58.777 ], 00:20:58.777 "dhchap_dhgroups": [ 00:20:58.777 "null", 00:20:58.777 "ffdhe2048", 00:20:58.777 "ffdhe3072", 00:20:58.777 "ffdhe4096", 00:20:58.777 "ffdhe6144", 00:20:58.777 "ffdhe8192" 00:20:58.777 ] 00:20:58.777 } 00:20:58.777 }, 00:20:58.777 { 00:20:58.777 "method": "bdev_nvme_attach_controller", 00:20:58.777 "params": { 00:20:58.777 "name": "TLSTEST", 00:20:58.777 "trtype": "TCP", 00:20:58.777 "adrfam": "IPv4", 00:20:58.777 "traddr": "10.0.0.2", 00:20:58.777 "trsvcid": "4420", 00:20:58.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.777 "prchk_reftag": false, 00:20:58.777 "prchk_guard": false, 00:20:58.777 "ctrlr_loss_timeout_sec": 0, 00:20:58.777 "reconnect_delay_sec": 0, 00:20:58.777 "fast_io_fail_timeout_sec": 0, 00:20:58.777 "psk": "/tmp/tmp.WUYqWABvAG", 00:20:58.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.777 "hdgst": false, 00:20:58.777 "ddgst": false 00:20:58.777 } 00:20:58.777 }, 00:20:58.777 { 00:20:58.777 "method": "bdev_nvme_set_hotplug", 00:20:58.777 "params": { 00:20:58.777 "period_us": 100000, 00:20:58.777 "enable": false 00:20:58.777 } 00:20:58.777 }, 00:20:58.777 { 00:20:58.777 "method": "bdev_wait_for_examine" 00:20:58.777 } 00:20:58.777 ] 00:20:58.777 }, 00:20:58.777 { 00:20:58.777 "subsystem": "nbd", 00:20:58.777 "config": [] 00:20:58.777 } 00:20:58.777 ] 00:20:58.777 }' 00:20:58.777 11:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 638808 00:20:58.777 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 638808 ']' 00:20:58.777 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 638808 00:20:58.777 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:58.777 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.777 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 638808 00:20:58.777 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:58.777 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:58.777 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 638808' 00:20:58.777 killing process with pid 638808 00:20:58.777 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 638808 00:20:58.777 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.777 00:20:58.777 Latency(us) 00:20:58.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:58.777 =================================================================================================================== 00:20:58.777 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.777 [2024-07-15 11:31:42.326048] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:58.777 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 638808 00:20:59.035 11:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 638499 00:20:59.035 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 638499 ']' 00:20:59.035 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 638499 00:20:59.035 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:59.035 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.035 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 638499 00:20:59.035 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:59.035 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:59.035 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 638499' 00:20:59.035 killing process with pid 638499 00:20:59.035 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 638499 00:20:59.035 [2024-07-15 11:31:42.554567] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:59.035 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 638499 00:20:59.295 11:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:59.295 11:31:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:59.295 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:59.295 11:31:42 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:59.295 "subsystems": [ 00:20:59.295 { 00:20:59.295 "subsystem": "keyring", 00:20:59.295 "config": [] 00:20:59.295 }, 00:20:59.295 { 00:20:59.295 "subsystem": "iobuf", 00:20:59.295 "config": [ 00:20:59.295 { 00:20:59.295 "method": "iobuf_set_options", 00:20:59.295 "params": { 00:20:59.295 "small_pool_count": 8192, 00:20:59.295 "large_pool_count": 1024, 00:20:59.295 "small_bufsize": 8192, 00:20:59.295 "large_bufsize": 135168 00:20:59.295 } 00:20:59.295 } 00:20:59.295 ] 00:20:59.295 }, 00:20:59.295 { 00:20:59.295 "subsystem": "sock", 00:20:59.295 "config": [ 00:20:59.295 { 00:20:59.295 "method": "sock_set_default_impl", 00:20:59.295 "params": { 00:20:59.295 "impl_name": "posix" 00:20:59.295 } 00:20:59.295 }, 00:20:59.295 { 00:20:59.295 "method": "sock_impl_set_options", 00:20:59.295 "params": { 00:20:59.295 "impl_name": "ssl", 00:20:59.295 "recv_buf_size": 4096, 00:20:59.295 "send_buf_size": 4096, 00:20:59.295 "enable_recv_pipe": true, 00:20:59.295 "enable_quickack": false, 00:20:59.295 "enable_placement_id": 0, 00:20:59.295 "enable_zerocopy_send_server": true, 00:20:59.295 "enable_zerocopy_send_client": false, 00:20:59.295 "zerocopy_threshold": 0, 00:20:59.295 "tls_version": 0, 00:20:59.295 "enable_ktls": false 00:20:59.295 } 00:20:59.295 }, 00:20:59.295 { 00:20:59.295 "method": "sock_impl_set_options", 00:20:59.295 "params": { 00:20:59.295 "impl_name": "posix", 00:20:59.295 "recv_buf_size": 2097152, 
00:20:59.295 "send_buf_size": 2097152, 00:20:59.295 "enable_recv_pipe": true, 00:20:59.295 "enable_quickack": false, 00:20:59.295 "enable_placement_id": 0, 00:20:59.295 "enable_zerocopy_send_server": true, 00:20:59.295 "enable_zerocopy_send_client": false, 00:20:59.295 "zerocopy_threshold": 0, 00:20:59.295 "tls_version": 0, 00:20:59.295 "enable_ktls": false 00:20:59.295 } 00:20:59.295 } 00:20:59.295 ] 00:20:59.295 }, 00:20:59.295 { 00:20:59.295 "subsystem": "vmd", 00:20:59.295 "config": [] 00:20:59.295 }, 00:20:59.295 { 00:20:59.295 "subsystem": "accel", 00:20:59.295 "config": [ 00:20:59.295 { 00:20:59.295 "method": "accel_set_options", 00:20:59.295 "params": { 00:20:59.295 "small_cache_size": 128, 00:20:59.295 "large_cache_size": 16, 00:20:59.295 "task_count": 2048, 00:20:59.295 "sequence_count": 2048, 00:20:59.295 "buf_count": 2048 00:20:59.295 } 00:20:59.295 } 00:20:59.295 ] 00:20:59.295 }, 00:20:59.295 { 00:20:59.295 "subsystem": "bdev", 00:20:59.295 "config": [ 00:20:59.295 { 00:20:59.295 "method": "bdev_set_options", 00:20:59.295 "params": { 00:20:59.295 "bdev_io_pool_size": 65535, 00:20:59.295 "bdev_io_cache_size": 256, 00:20:59.295 "bdev_auto_examine": true, 00:20:59.295 "iobuf_small_cache_size": 128, 00:20:59.295 "iobuf_large_cache_size": 16 00:20:59.295 } 00:20:59.295 }, 00:20:59.295 { 00:20:59.295 "method": "bdev_raid_set_options", 00:20:59.295 "params": { 00:20:59.295 "process_window_size_kb": 1024 00:20:59.295 } 00:20:59.295 }, 00:20:59.295 { 00:20:59.295 "method": "bdev_iscsi_set_options", 00:20:59.295 "params": { 00:20:59.295 "timeout_sec": 30 00:20:59.295 } 00:20:59.295 }, 00:20:59.295 { 00:20:59.295 "method": "bdev_nvme_set_options", 00:20:59.295 "params": { 00:20:59.295 "action_on_timeout": "none", 00:20:59.295 "timeout_us": 0, 00:20:59.295 "timeout_admin_us": 0, 00:20:59.295 "keep_alive_timeout_ms": 10000, 00:20:59.295 "arbitration_burst": 0, 00:20:59.295 "low_priority_weight": 0, 00:20:59.295 "medium_priority_weight": 0, 00:20:59.295 "high_priority_weight": 0, 00:20:59.295 "nvme_adminq_poll_period_us": 10000, 00:20:59.295 "nvme_ioq_poll_period_us": 0, 00:20:59.295 "io_queue_requests": 0, 00:20:59.295 "delay_cmd_submit": true, 00:20:59.295 "transport_retry_count": 4, 00:20:59.295 "bdev_retry_count": 3, 00:20:59.295 "transport_ack_timeout": 0, 00:20:59.295 "ctrlr_loss_timeout_sec": 0, 00:20:59.295 "reconnect_delay_sec": 0, 00:20:59.295 "fast_io_fail_timeout_sec": 0, 00:20:59.295 "disable_auto_failback": false, 00:20:59.295 "generate_uuids": false, 00:20:59.295 "transport_tos": 0, 00:20:59.295 "nvme_error_stat": false, 00:20:59.295 "rdma_srq_size": 0, 00:20:59.295 "io_path_stat": false, 00:20:59.295 "allow_accel_sequence": false, 00:20:59.295 "rdma_max_cq_size": 0, 00:20:59.295 "rdma_cm_event_timeout_ms": 0, 00:20:59.295 "dhchap_digests": [ 00:20:59.295 "sha256", 00:20:59.295 "sha384", 00:20:59.295 "sha512" 00:20:59.295 ], 00:20:59.295 "dhchap_dhgroups": [ 00:20:59.295 "null", 00:20:59.295 "ffdhe2048", 00:20:59.295 "ffdhe3072", 00:20:59.295 "ffdhe4096", 00:20:59.295 "ffdhe6144", 00:20:59.295 "ffdhe8192" 00:20:59.295 ] 00:20:59.295 } 00:20:59.295 }, 00:20:59.296 { 00:20:59.296 "method": "bdev_nvme_set_hotplug", 00:20:59.296 "params": { 00:20:59.296 "period_us": 100000, 00:20:59.296 "enable": false 00:20:59.296 } 00:20:59.296 }, 00:20:59.296 { 00:20:59.296 "method": "bdev_malloc_create", 00:20:59.296 "params": { 00:20:59.296 "name": "malloc0", 00:20:59.296 "num_blocks": 8192, 00:20:59.296 "block_size": 4096, 00:20:59.296 "physical_block_size": 4096, 00:20:59.296 "uuid": 
"7e8ae0ab-8981-4b85-9b24-a7da57e4d037", 00:20:59.296 "optimal_io_boundary": 0 00:20:59.296 } 00:20:59.296 }, 00:20:59.296 { 00:20:59.296 "method": "bdev_wait_for_examine" 00:20:59.296 } 00:20:59.296 ] 00:20:59.296 }, 00:20:59.296 { 00:20:59.296 "subsystem": "nbd", 00:20:59.296 "config": [] 00:20:59.296 }, 00:20:59.296 { 00:20:59.296 "subsystem": "scheduler", 00:20:59.296 "config": [ 00:20:59.296 { 00:20:59.296 "method": "framework_set_scheduler", 00:20:59.296 "params": { 00:20:59.296 "name": "static" 00:20:59.296 } 00:20:59.296 } 00:20:59.296 ] 00:20:59.296 }, 00:20:59.296 { 00:20:59.296 "subsystem": "nvmf", 00:20:59.296 "config": [ 00:20:59.296 { 00:20:59.296 "method": "nvmf_set_config", 00:20:59.296 "params": { 00:20:59.296 "discovery_filter": "match_any", 00:20:59.296 "admin_cmd_passthru": { 00:20:59.296 "identify_ctrlr": false 00:20:59.296 } 00:20:59.296 } 00:20:59.296 }, 00:20:59.296 { 00:20:59.296 "method": "nvmf_set_max_subsystems", 00:20:59.296 "params": { 00:20:59.296 "max_subsystems": 1024 00:20:59.296 } 00:20:59.296 }, 00:20:59.296 { 00:20:59.296 "method": "nvmf_set_crdt", 00:20:59.296 "params": { 00:20:59.296 "crdt1": 0, 00:20:59.296 "crdt2": 0, 00:20:59.296 "crdt3": 0 00:20:59.296 } 00:20:59.296 }, 00:20:59.296 { 00:20:59.296 "method": "nvmf_create_transport", 00:20:59.296 "params": { 00:20:59.296 "trtype": "TCP", 00:20:59.296 "max_queue_depth": 128, 00:20:59.296 "max_io_qpairs_per_ctrlr": 127, 00:20:59.296 "in_capsule_data_size": 4096, 00:20:59.296 "max_io_size": 131072, 00:20:59.296 "io_unit_size": 131072, 00:20:59.296 "max_aq_depth": 128, 00:20:59.296 "num_shared_buffers": 511, 00:20:59.296 "buf_cache_size": 4294967295, 00:20:59.296 "dif_insert_or_strip": false, 00:20:59.296 "zcopy": false, 00:20:59.296 "c2h_success": false, 00:20:59.296 "sock_priority": 0, 00:20:59.296 "abort_timeout_sec": 1, 00:20:59.296 "ack_timeout": 0, 00:20:59.296 "data_wr_pool_size": 0 00:20:59.296 } 00:20:59.296 }, 00:20:59.296 { 00:20:59.296 "method": "nvmf_create_subsystem", 00:20:59.296 "params": { 00:20:59.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.296 "allow_any_host": false, 00:20:59.296 "serial_number": "SPDK00000000000001", 00:20:59.296 "model_number": "SPDK bdev Controller", 00:20:59.296 "max_namespaces": 10, 00:20:59.296 "min_cntlid": 1, 00:20:59.296 "max_cntlid": 65519, 00:20:59.296 "ana_reporting": false 00:20:59.296 } 00:20:59.296 }, 00:20:59.296 { 00:20:59.296 "method": "nvmf_subsystem_add_host", 00:20:59.296 "params": { 00:20:59.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.296 "host": "nqn.2016-06.io.spdk:host1", 00:20:59.296 "psk": "/tmp/tmp.WUYqWABvAG" 00:20:59.296 } 00:20:59.296 }, 00:20:59.296 { 00:20:59.296 "method": "nvmf_subsystem_add_ns", 00:20:59.296 "params": { 00:20:59.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.296 "namespace": { 00:20:59.296 "nsid": 1, 00:20:59.296 "bdev_name": "malloc0", 00:20:59.296 "nguid": "7E8AE0AB89814B859B24A7DA57E4D037", 00:20:59.296 "uuid": "7e8ae0ab-8981-4b85-9b24-a7da57e4d037", 00:20:59.296 "no_auto_visible": false 00:20:59.296 } 00:20:59.296 } 00:20:59.296 }, 00:20:59.296 { 00:20:59.296 "method": "nvmf_subsystem_add_listener", 00:20:59.296 "params": { 00:20:59.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.296 "listen_address": { 00:20:59.296 "trtype": "TCP", 00:20:59.296 "adrfam": "IPv4", 00:20:59.296 "traddr": "10.0.0.2", 00:20:59.296 "trsvcid": "4420" 00:20:59.296 }, 00:20:59.296 "secure_channel": true 00:20:59.296 } 00:20:59.296 } 00:20:59.296 ] 00:20:59.296 } 00:20:59.296 ] 00:20:59.296 }' 00:20:59.296 11:31:42 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.296 11:31:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=639229 00:20:59.296 11:31:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:59.296 11:31:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 639229 00:20:59.296 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 639229 ']' 00:20:59.296 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.296 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.296 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.296 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.296 11:31:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.296 [2024-07-15 11:31:42.802984] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:20:59.296 [2024-07-15 11:31:42.803029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.296 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.296 [2024-07-15 11:31:42.863957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.555 [2024-07-15 11:31:42.943511] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.555 [2024-07-15 11:31:42.943547] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.555 [2024-07-15 11:31:42.943554] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.555 [2024-07-15 11:31:42.943560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.555 [2024-07-15 11:31:42.943565] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:59.555 [2024-07-15 11:31:42.943616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.555 [2024-07-15 11:31:43.145147] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.823 [2024-07-15 11:31:43.161119] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:59.823 [2024-07-15 11:31:43.177160] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.823 [2024-07-15 11:31:43.188539] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=639382 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 639382 /var/tmp/bdevperf.sock 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 639382 ']' 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:00.083 "subsystems": [ 00:21:00.083 { 00:21:00.083 "subsystem": "keyring", 00:21:00.083 "config": [] 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "subsystem": "iobuf", 00:21:00.083 "config": [ 00:21:00.083 { 00:21:00.083 "method": "iobuf_set_options", 00:21:00.083 "params": { 00:21:00.083 "small_pool_count": 8192, 00:21:00.083 "large_pool_count": 1024, 00:21:00.083 "small_bufsize": 8192, 00:21:00.083 "large_bufsize": 135168 00:21:00.083 } 00:21:00.083 } 00:21:00.083 ] 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "subsystem": "sock", 00:21:00.083 "config": [ 00:21:00.083 { 00:21:00.083 "method": "sock_set_default_impl", 00:21:00.083 "params": { 00:21:00.083 "impl_name": "posix" 00:21:00.083 } 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "method": "sock_impl_set_options", 00:21:00.083 "params": { 00:21:00.083 "impl_name": "ssl", 00:21:00.083 "recv_buf_size": 4096, 00:21:00.083 "send_buf_size": 4096, 00:21:00.083 "enable_recv_pipe": true, 00:21:00.083 "enable_quickack": false, 00:21:00.083 "enable_placement_id": 0, 00:21:00.083 "enable_zerocopy_send_server": true, 00:21:00.083 "enable_zerocopy_send_client": false, 00:21:00.083 "zerocopy_threshold": 0, 00:21:00.083 "tls_version": 0, 00:21:00.083 "enable_ktls": false 00:21:00.083 } 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "method": "sock_impl_set_options", 00:21:00.083 "params": { 00:21:00.083 "impl_name": "posix", 00:21:00.083 "recv_buf_size": 2097152, 00:21:00.083 "send_buf_size": 2097152, 00:21:00.083 "enable_recv_pipe": true, 00:21:00.083 "enable_quickack": false, 00:21:00.083 "enable_placement_id": 0, 00:21:00.083 "enable_zerocopy_send_server": true, 00:21:00.083 "enable_zerocopy_send_client": false, 00:21:00.083 "zerocopy_threshold": 0, 00:21:00.083 "tls_version": 0, 00:21:00.083 "enable_ktls": false 00:21:00.083 } 00:21:00.083 } 00:21:00.083 ] 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "subsystem": "vmd", 00:21:00.083 "config": [] 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "subsystem": "accel", 00:21:00.083 "config": [ 00:21:00.083 { 00:21:00.083 "method": "accel_set_options", 00:21:00.083 "params": { 00:21:00.083 "small_cache_size": 128, 00:21:00.083 "large_cache_size": 16, 00:21:00.083 "task_count": 2048, 00:21:00.083 "sequence_count": 2048, 00:21:00.083 "buf_count": 2048 00:21:00.083 } 00:21:00.083 } 00:21:00.083 ] 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "subsystem": "bdev", 00:21:00.083 "config": [ 00:21:00.083 { 00:21:00.083 "method": "bdev_set_options", 00:21:00.083 "params": { 00:21:00.083 "bdev_io_pool_size": 65535, 00:21:00.083 "bdev_io_cache_size": 256, 00:21:00.083 "bdev_auto_examine": true, 00:21:00.083 "iobuf_small_cache_size": 128, 00:21:00.083 "iobuf_large_cache_size": 16 00:21:00.083 } 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "method": "bdev_raid_set_options", 00:21:00.083 "params": { 00:21:00.083 "process_window_size_kb": 1024 00:21:00.083 } 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "method": "bdev_iscsi_set_options", 00:21:00.083 "params": { 00:21:00.083 "timeout_sec": 30 00:21:00.083 } 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "method": "bdev_nvme_set_options", 00:21:00.083 "params": { 00:21:00.083 "action_on_timeout": "none", 00:21:00.083 "timeout_us": 0, 00:21:00.083 "timeout_admin_us": 0, 00:21:00.083 "keep_alive_timeout_ms": 10000, 00:21:00.083 "arbitration_burst": 0, 00:21:00.083 "low_priority_weight": 0, 00:21:00.083 "medium_priority_weight": 0, 00:21:00.083 "high_priority_weight": 0, 00:21:00.083 
"nvme_adminq_poll_period_us": 10000, 00:21:00.083 "nvme_ioq_poll_period_us": 0, 00:21:00.083 "io_queue_requests": 512, 00:21:00.083 "delay_cmd_submit": true, 00:21:00.083 "transport_retry_count": 4, 00:21:00.083 "bdev_retry_count": 3, 00:21:00.083 "transport_ack_timeout": 0, 00:21:00.083 "ctrlr_loss_timeout_sec": 0, 00:21:00.083 "reconnect_delay_sec": 0, 00:21:00.083 "fast_io_fail_timeout_sec": 0, 00:21:00.083 "disable_auto_failback": false, 00:21:00.083 "generate_uuids": false, 00:21:00.083 "transport_tos": 0, 00:21:00.083 "nvme_error_stat": false, 00:21:00.083 "rdma_srq_size": 0, 00:21:00.083 "io_path_stat": false, 00:21:00.083 "allow_accel_sequence": false, 00:21:00.083 "rdma_max_cq_size": 0, 00:21:00.083 "rdma_cm_event_timeout_ms": 0, 00:21:00.083 "dhchap_digests": [ 00:21:00.083 "sha256", 00:21:00.083 "sha384", 00:21:00.083 "sha512" 00:21:00.083 ], 00:21:00.083 "dhchap_dhgroups": [ 00:21:00.083 "null", 00:21:00.083 "ffdhe2048", 00:21:00.083 "ffdhe3072", 00:21:00.083 "ffdhe4096", 00:21:00.083 "ffdhe6144", 00:21:00.083 "ffdhe8192" 00:21:00.083 ] 00:21:00.083 } 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "method": "bdev_nvme_attach_controller", 00:21:00.083 "params": { 00:21:00.083 "name": "TLSTEST", 00:21:00.083 "trtype": "TCP", 00:21:00.083 "adrfam": "IPv4", 00:21:00.083 "traddr": "10.0.0.2", 00:21:00.083 "trsvcid": "4420", 00:21:00.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.083 "prchk_reftag": false, 00:21:00.083 "prchk_guard": false, 00:21:00.083 "ctrlr_loss_timeout_sec": 0, 00:21:00.083 "reconnect_delay_sec": 0, 00:21:00.083 "fast_io_fail_timeout_sec": 0, 00:21:00.083 "psk": "/tmp/tmp.WUYqWABvAG", 00:21:00.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.083 "hdgst": false, 00:21:00.083 "ddgst": false 00:21:00.083 } 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "method": "bdev_nvme_set_hotplug", 00:21:00.083 "params": { 00:21:00.083 "period_us": 100000, 00:21:00.083 "enable": false 00:21:00.083 } 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "method": "bdev_wait_for_examine" 00:21:00.083 } 00:21:00.083 ] 00:21:00.083 }, 00:21:00.083 { 00:21:00.083 "subsystem": "nbd", 00:21:00.083 "config": [] 00:21:00.083 } 00:21:00.083 ] 00:21:00.083 }' 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.083 11:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.343 [2024-07-15 11:31:43.678040] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:21:00.343 [2024-07-15 11:31:43.678087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639382 ] 00:21:00.343 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.343 [2024-07-15 11:31:43.746154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.343 [2024-07-15 11:31:43.824746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.602 [2024-07-15 11:31:43.967603] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.602 [2024-07-15 11:31:43.967685] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:01.170 11:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.170 11:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:01.170 11:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:01.170 Running I/O for 10 seconds... 00:21:11.171 00:21:11.171 Latency(us) 00:21:11.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.171 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:11.171 Verification LBA range: start 0x0 length 0x2000 00:21:11.171 TLSTESTn1 : 10.02 3469.61 13.55 0.00 0.00 36840.58 6411.13 60635.05 00:21:11.171 =================================================================================================================== 00:21:11.171 Total : 3469.61 13.55 0.00 0.00 36840.58 6411.13 60635.05 00:21:11.171 0 00:21:11.171 11:31:54 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.171 11:31:54 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 639382 00:21:11.171 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 639382 ']' 00:21:11.171 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 639382 00:21:11.171 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:11.171 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.171 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 639382 00:21:11.171 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:11.171 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:11.171 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 639382' 00:21:11.171 killing process with pid 639382 00:21:11.171 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 639382 00:21:11.171 Received shutdown signal, test time was about 10.000000 seconds 00:21:11.171 00:21:11.171 Latency(us) 00:21:11.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.171 =================================================================================================================== 00:21:11.171 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:11.171 [2024-07-15 11:31:54.687336] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 
scheduled for removal in v24.09 hit 1 times 00:21:11.171 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 639382 00:21:11.430 11:31:54 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 639229 00:21:11.430 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 639229 ']' 00:21:11.430 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 639229 00:21:11.430 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:11.430 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.430 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 639229 00:21:11.430 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:11.430 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:11.430 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 639229' 00:21:11.430 killing process with pid 639229 00:21:11.430 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 639229 00:21:11.430 [2024-07-15 11:31:54.908464] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:11.430 11:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 639229 00:21:11.689 11:31:55 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:11.689 11:31:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.689 11:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:11.689 11:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.689 11:31:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=641287 00:21:11.689 11:31:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:11.689 11:31:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 641287 00:21:11.689 11:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 641287 ']' 00:21:11.689 11:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.689 11:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.689 11:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.690 11:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.690 11:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.690 [2024-07-15 11:31:55.150566] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:21:11.690 [2024-07-15 11:31:55.150615] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.690 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.690 [2024-07-15 11:31:55.217126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.948 [2024-07-15 11:31:55.296875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.948 [2024-07-15 11:31:55.296907] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.948 [2024-07-15 11:31:55.296914] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.948 [2024-07-15 11:31:55.296920] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.948 [2024-07-15 11:31:55.296925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.948 [2024-07-15 11:31:55.296942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.516 11:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.516 11:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:12.516 11:31:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:12.516 11:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:12.516 11:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.516 11:31:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.516 11:31:55 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.WUYqWABvAG 00:21:12.516 11:31:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WUYqWABvAG 00:21:12.516 11:31:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:12.774 [2024-07-15 11:31:56.136684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.774 11:31:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:12.775 11:31:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:13.033 [2024-07-15 11:31:56.521687] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.033 [2024-07-15 11:31:56.521879] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.033 11:31:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:13.291 malloc0 00:21:13.291 11:31:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:13.549 11:31:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.WUYqWABvAG 00:21:13.549 [2024-07-15 11:31:57.067427] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:13.549 11:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:13.549 11:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=641577 00:21:13.549 11:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:13.549 11:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 641577 /var/tmp/bdevperf.sock 00:21:13.549 11:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 641577 ']' 00:21:13.549 11:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.549 11:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.549 11:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.549 11:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.549 11:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.807 [2024-07-15 11:31:57.140833] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:13.807 [2024-07-15 11:31:57.140875] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid641577 ] 00:21:13.807 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.807 [2024-07-15 11:31:57.206596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.807 [2024-07-15 11:31:57.280133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.434 11:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.434 11:31:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:14.434 11:31:57 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WUYqWABvAG 00:21:14.692 11:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:14.692 [2024-07-15 11:31:58.280179] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.950 nvme0n1 00:21:14.950 11:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:14.950 Running I/O for 1 seconds... 
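For reference, the PSK-based TLS setup being exercised at this point reduces to the short shell sketch below. It is only a condensed restatement of the rpc.py invocations already traced in this run (target/tls.sh@51-58 and @227-228), not a separate procedure: the 10.0.0.2 address, port 4420, the cnode1/host1 NQNs and the /tmp/tmp.WUYqWABvAG key file are this job's values, and $rootdir is a stand-in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout used here. It assumes nvmf_tgt and a bdevperf -z instance are already listening on their respective RPC sockets, as launched earlier in this log.

    # target side: TCP transport, TLS-enabled listener (-k), malloc namespace, host entry keyed by the PSK file
    $rootdir/scripts/rpc.py nvmf_create_transport -t tcp -o
    $rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rootdir/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    $rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rootdir/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WUYqWABvAG

    # initiator side (bdevperf RPC socket): register the PSK as keyring key0, attach over TLS, then drive I/O
    $rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WUYqWABvAG
    $rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    $rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests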
00:21:15.885 00:21:15.885 Latency(us) 00:21:15.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.885 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:15.885 Verification LBA range: start 0x0 length 0x2000 00:21:15.886 nvme0n1 : 1.01 5517.23 21.55 0.00 0.00 23034.38 5841.25 30089.57 00:21:15.886 =================================================================================================================== 00:21:15.886 Total : 5517.23 21.55 0.00 0.00 23034.38 5841.25 30089.57 00:21:15.886 0 00:21:16.143 11:31:59 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 641577 00:21:16.143 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 641577 ']' 00:21:16.143 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 641577 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 641577 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 641577' 00:21:16.144 killing process with pid 641577 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 641577 00:21:16.144 Received shutdown signal, test time was about 1.000000 seconds 00:21:16.144 00:21:16.144 Latency(us) 00:21:16.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.144 =================================================================================================================== 00:21:16.144 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 641577 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 641287 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 641287 ']' 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 641287 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.144 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 641287 00:21:16.402 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:16.402 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 641287' 00:21:16.403 killing process with pid 641287 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 641287 00:21:16.403 [2024-07-15 11:31:59.769791] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 641287 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.403 11:31:59 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=642054 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 642054 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 642054 ']' 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.403 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.661 [2024-07-15 11:32:00.015450] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:16.661 [2024-07-15 11:32:00.015499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.661 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.661 [2024-07-15 11:32:00.086857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.661 [2024-07-15 11:32:00.162815] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.661 [2024-07-15 11:32:00.162848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.661 [2024-07-15 11:32:00.162855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.661 [2024-07-15 11:32:00.162862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.661 [2024-07-15 11:32:00.162867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:16.661 [2024-07-15 11:32:00.162885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.596 [2024-07-15 11:32:00.870949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.596 malloc0 00:21:17.596 [2024-07-15 11:32:00.899288] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.596 [2024-07-15 11:32:00.899479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=642300 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 642300 /var/tmp/bdevperf.sock 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 642300 ']' 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.596 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.596 [2024-07-15 11:32:00.974058] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:21:17.596 [2024-07-15 11:32:00.974099] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642300 ] 00:21:17.596 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.596 [2024-07-15 11:32:01.037747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.596 [2024-07-15 11:32:01.120539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.530 11:32:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.530 11:32:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:18.530 11:32:01 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WUYqWABvAG 00:21:18.531 11:32:01 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:18.531 [2024-07-15 11:32:02.112604] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.789 nvme0n1 00:21:18.789 11:32:02 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.789 Running I/O for 1 seconds... 00:21:19.752 00:21:19.752 Latency(us) 00:21:19.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.752 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:19.752 Verification LBA range: start 0x0 length 0x2000 00:21:19.752 nvme0n1 : 1.01 5672.73 22.16 0.00 0.00 22387.98 4843.97 23478.98 00:21:19.752 =================================================================================================================== 00:21:19.752 Total : 5672.73 22.16 0.00 0.00 22387.98 4843.97 23478.98 00:21:19.752 0 00:21:19.752 11:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:19.752 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.752 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.010 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.010 11:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:20.010 "subsystems": [ 00:21:20.010 { 00:21:20.010 "subsystem": "keyring", 00:21:20.010 "config": [ 00:21:20.010 { 00:21:20.010 "method": "keyring_file_add_key", 00:21:20.010 "params": { 00:21:20.010 "name": "key0", 00:21:20.010 "path": "/tmp/tmp.WUYqWABvAG" 00:21:20.010 } 00:21:20.010 } 00:21:20.010 ] 00:21:20.010 }, 00:21:20.010 { 00:21:20.010 "subsystem": "iobuf", 00:21:20.010 "config": [ 00:21:20.010 { 00:21:20.010 "method": "iobuf_set_options", 00:21:20.010 "params": { 00:21:20.010 "small_pool_count": 8192, 00:21:20.010 "large_pool_count": 1024, 00:21:20.010 "small_bufsize": 8192, 00:21:20.010 "large_bufsize": 135168 00:21:20.010 } 00:21:20.010 } 00:21:20.010 ] 00:21:20.010 }, 00:21:20.010 { 00:21:20.010 "subsystem": "sock", 00:21:20.010 "config": [ 00:21:20.010 { 00:21:20.010 "method": "sock_set_default_impl", 00:21:20.010 "params": { 00:21:20.010 "impl_name": "posix" 00:21:20.010 } 
00:21:20.010 }, 00:21:20.010 { 00:21:20.010 "method": "sock_impl_set_options", 00:21:20.010 "params": { 00:21:20.010 "impl_name": "ssl", 00:21:20.010 "recv_buf_size": 4096, 00:21:20.010 "send_buf_size": 4096, 00:21:20.010 "enable_recv_pipe": true, 00:21:20.010 "enable_quickack": false, 00:21:20.010 "enable_placement_id": 0, 00:21:20.010 "enable_zerocopy_send_server": true, 00:21:20.010 "enable_zerocopy_send_client": false, 00:21:20.010 "zerocopy_threshold": 0, 00:21:20.010 "tls_version": 0, 00:21:20.010 "enable_ktls": false 00:21:20.010 } 00:21:20.010 }, 00:21:20.010 { 00:21:20.010 "method": "sock_impl_set_options", 00:21:20.010 "params": { 00:21:20.010 "impl_name": "posix", 00:21:20.010 "recv_buf_size": 2097152, 00:21:20.010 "send_buf_size": 2097152, 00:21:20.010 "enable_recv_pipe": true, 00:21:20.010 "enable_quickack": false, 00:21:20.010 "enable_placement_id": 0, 00:21:20.011 "enable_zerocopy_send_server": true, 00:21:20.011 "enable_zerocopy_send_client": false, 00:21:20.011 "zerocopy_threshold": 0, 00:21:20.011 "tls_version": 0, 00:21:20.011 "enable_ktls": false 00:21:20.011 } 00:21:20.011 } 00:21:20.011 ] 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "subsystem": "vmd", 00:21:20.011 "config": [] 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "subsystem": "accel", 00:21:20.011 "config": [ 00:21:20.011 { 00:21:20.011 "method": "accel_set_options", 00:21:20.011 "params": { 00:21:20.011 "small_cache_size": 128, 00:21:20.011 "large_cache_size": 16, 00:21:20.011 "task_count": 2048, 00:21:20.011 "sequence_count": 2048, 00:21:20.011 "buf_count": 2048 00:21:20.011 } 00:21:20.011 } 00:21:20.011 ] 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "subsystem": "bdev", 00:21:20.011 "config": [ 00:21:20.011 { 00:21:20.011 "method": "bdev_set_options", 00:21:20.011 "params": { 00:21:20.011 "bdev_io_pool_size": 65535, 00:21:20.011 "bdev_io_cache_size": 256, 00:21:20.011 "bdev_auto_examine": true, 00:21:20.011 "iobuf_small_cache_size": 128, 00:21:20.011 "iobuf_large_cache_size": 16 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "bdev_raid_set_options", 00:21:20.011 "params": { 00:21:20.011 "process_window_size_kb": 1024 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "bdev_iscsi_set_options", 00:21:20.011 "params": { 00:21:20.011 "timeout_sec": 30 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "bdev_nvme_set_options", 00:21:20.011 "params": { 00:21:20.011 "action_on_timeout": "none", 00:21:20.011 "timeout_us": 0, 00:21:20.011 "timeout_admin_us": 0, 00:21:20.011 "keep_alive_timeout_ms": 10000, 00:21:20.011 "arbitration_burst": 0, 00:21:20.011 "low_priority_weight": 0, 00:21:20.011 "medium_priority_weight": 0, 00:21:20.011 "high_priority_weight": 0, 00:21:20.011 "nvme_adminq_poll_period_us": 10000, 00:21:20.011 "nvme_ioq_poll_period_us": 0, 00:21:20.011 "io_queue_requests": 0, 00:21:20.011 "delay_cmd_submit": true, 00:21:20.011 "transport_retry_count": 4, 00:21:20.011 "bdev_retry_count": 3, 00:21:20.011 "transport_ack_timeout": 0, 00:21:20.011 "ctrlr_loss_timeout_sec": 0, 00:21:20.011 "reconnect_delay_sec": 0, 00:21:20.011 "fast_io_fail_timeout_sec": 0, 00:21:20.011 "disable_auto_failback": false, 00:21:20.011 "generate_uuids": false, 00:21:20.011 "transport_tos": 0, 00:21:20.011 "nvme_error_stat": false, 00:21:20.011 "rdma_srq_size": 0, 00:21:20.011 "io_path_stat": false, 00:21:20.011 "allow_accel_sequence": false, 00:21:20.011 "rdma_max_cq_size": 0, 00:21:20.011 "rdma_cm_event_timeout_ms": 0, 00:21:20.011 "dhchap_digests": [ 00:21:20.011 "sha256", 
00:21:20.011 "sha384", 00:21:20.011 "sha512" 00:21:20.011 ], 00:21:20.011 "dhchap_dhgroups": [ 00:21:20.011 "null", 00:21:20.011 "ffdhe2048", 00:21:20.011 "ffdhe3072", 00:21:20.011 "ffdhe4096", 00:21:20.011 "ffdhe6144", 00:21:20.011 "ffdhe8192" 00:21:20.011 ] 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "bdev_nvme_set_hotplug", 00:21:20.011 "params": { 00:21:20.011 "period_us": 100000, 00:21:20.011 "enable": false 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "bdev_malloc_create", 00:21:20.011 "params": { 00:21:20.011 "name": "malloc0", 00:21:20.011 "num_blocks": 8192, 00:21:20.011 "block_size": 4096, 00:21:20.011 "physical_block_size": 4096, 00:21:20.011 "uuid": "e9e1d9fb-10ba-4ea1-b5ac-fcf22b31d93b", 00:21:20.011 "optimal_io_boundary": 0 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "bdev_wait_for_examine" 00:21:20.011 } 00:21:20.011 ] 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "subsystem": "nbd", 00:21:20.011 "config": [] 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "subsystem": "scheduler", 00:21:20.011 "config": [ 00:21:20.011 { 00:21:20.011 "method": "framework_set_scheduler", 00:21:20.011 "params": { 00:21:20.011 "name": "static" 00:21:20.011 } 00:21:20.011 } 00:21:20.011 ] 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "subsystem": "nvmf", 00:21:20.011 "config": [ 00:21:20.011 { 00:21:20.011 "method": "nvmf_set_config", 00:21:20.011 "params": { 00:21:20.011 "discovery_filter": "match_any", 00:21:20.011 "admin_cmd_passthru": { 00:21:20.011 "identify_ctrlr": false 00:21:20.011 } 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "nvmf_set_max_subsystems", 00:21:20.011 "params": { 00:21:20.011 "max_subsystems": 1024 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "nvmf_set_crdt", 00:21:20.011 "params": { 00:21:20.011 "crdt1": 0, 00:21:20.011 "crdt2": 0, 00:21:20.011 "crdt3": 0 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "nvmf_create_transport", 00:21:20.011 "params": { 00:21:20.011 "trtype": "TCP", 00:21:20.011 "max_queue_depth": 128, 00:21:20.011 "max_io_qpairs_per_ctrlr": 127, 00:21:20.011 "in_capsule_data_size": 4096, 00:21:20.011 "max_io_size": 131072, 00:21:20.011 "io_unit_size": 131072, 00:21:20.011 "max_aq_depth": 128, 00:21:20.011 "num_shared_buffers": 511, 00:21:20.011 "buf_cache_size": 4294967295, 00:21:20.011 "dif_insert_or_strip": false, 00:21:20.011 "zcopy": false, 00:21:20.011 "c2h_success": false, 00:21:20.011 "sock_priority": 0, 00:21:20.011 "abort_timeout_sec": 1, 00:21:20.011 "ack_timeout": 0, 00:21:20.011 "data_wr_pool_size": 0 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "nvmf_create_subsystem", 00:21:20.011 "params": { 00:21:20.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.011 "allow_any_host": false, 00:21:20.011 "serial_number": "00000000000000000000", 00:21:20.011 "model_number": "SPDK bdev Controller", 00:21:20.011 "max_namespaces": 32, 00:21:20.011 "min_cntlid": 1, 00:21:20.011 "max_cntlid": 65519, 00:21:20.011 "ana_reporting": false 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "nvmf_subsystem_add_host", 00:21:20.011 "params": { 00:21:20.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.011 "host": "nqn.2016-06.io.spdk:host1", 00:21:20.011 "psk": "key0" 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "nvmf_subsystem_add_ns", 00:21:20.011 "params": { 00:21:20.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.011 "namespace": { 00:21:20.011 "nsid": 1, 
00:21:20.011 "bdev_name": "malloc0", 00:21:20.011 "nguid": "E9E1D9FB10BA4EA1B5ACFCF22B31D93B", 00:21:20.011 "uuid": "e9e1d9fb-10ba-4ea1-b5ac-fcf22b31d93b", 00:21:20.011 "no_auto_visible": false 00:21:20.011 } 00:21:20.011 } 00:21:20.011 }, 00:21:20.011 { 00:21:20.011 "method": "nvmf_subsystem_add_listener", 00:21:20.011 "params": { 00:21:20.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.011 "listen_address": { 00:21:20.011 "trtype": "TCP", 00:21:20.011 "adrfam": "IPv4", 00:21:20.011 "traddr": "10.0.0.2", 00:21:20.011 "trsvcid": "4420" 00:21:20.011 }, 00:21:20.011 "secure_channel": true 00:21:20.011 } 00:21:20.011 } 00:21:20.011 ] 00:21:20.011 } 00:21:20.011 ] 00:21:20.011 }' 00:21:20.011 11:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:20.270 11:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:20.270 "subsystems": [ 00:21:20.270 { 00:21:20.270 "subsystem": "keyring", 00:21:20.270 "config": [ 00:21:20.270 { 00:21:20.270 "method": "keyring_file_add_key", 00:21:20.270 "params": { 00:21:20.270 "name": "key0", 00:21:20.270 "path": "/tmp/tmp.WUYqWABvAG" 00:21:20.270 } 00:21:20.270 } 00:21:20.270 ] 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "subsystem": "iobuf", 00:21:20.270 "config": [ 00:21:20.270 { 00:21:20.270 "method": "iobuf_set_options", 00:21:20.270 "params": { 00:21:20.270 "small_pool_count": 8192, 00:21:20.270 "large_pool_count": 1024, 00:21:20.270 "small_bufsize": 8192, 00:21:20.270 "large_bufsize": 135168 00:21:20.270 } 00:21:20.270 } 00:21:20.270 ] 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "subsystem": "sock", 00:21:20.270 "config": [ 00:21:20.270 { 00:21:20.270 "method": "sock_set_default_impl", 00:21:20.270 "params": { 00:21:20.270 "impl_name": "posix" 00:21:20.270 } 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "method": "sock_impl_set_options", 00:21:20.270 "params": { 00:21:20.270 "impl_name": "ssl", 00:21:20.270 "recv_buf_size": 4096, 00:21:20.270 "send_buf_size": 4096, 00:21:20.270 "enable_recv_pipe": true, 00:21:20.270 "enable_quickack": false, 00:21:20.270 "enable_placement_id": 0, 00:21:20.270 "enable_zerocopy_send_server": true, 00:21:20.270 "enable_zerocopy_send_client": false, 00:21:20.270 "zerocopy_threshold": 0, 00:21:20.270 "tls_version": 0, 00:21:20.270 "enable_ktls": false 00:21:20.270 } 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "method": "sock_impl_set_options", 00:21:20.270 "params": { 00:21:20.270 "impl_name": "posix", 00:21:20.270 "recv_buf_size": 2097152, 00:21:20.270 "send_buf_size": 2097152, 00:21:20.270 "enable_recv_pipe": true, 00:21:20.270 "enable_quickack": false, 00:21:20.270 "enable_placement_id": 0, 00:21:20.270 "enable_zerocopy_send_server": true, 00:21:20.270 "enable_zerocopy_send_client": false, 00:21:20.270 "zerocopy_threshold": 0, 00:21:20.270 "tls_version": 0, 00:21:20.270 "enable_ktls": false 00:21:20.270 } 00:21:20.270 } 00:21:20.270 ] 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "subsystem": "vmd", 00:21:20.270 "config": [] 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "subsystem": "accel", 00:21:20.270 "config": [ 00:21:20.270 { 00:21:20.270 "method": "accel_set_options", 00:21:20.270 "params": { 00:21:20.270 "small_cache_size": 128, 00:21:20.270 "large_cache_size": 16, 00:21:20.270 "task_count": 2048, 00:21:20.270 "sequence_count": 2048, 00:21:20.270 "buf_count": 2048 00:21:20.270 } 00:21:20.270 } 00:21:20.270 ] 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "subsystem": "bdev", 00:21:20.270 "config": [ 
00:21:20.270 { 00:21:20.270 "method": "bdev_set_options", 00:21:20.270 "params": { 00:21:20.270 "bdev_io_pool_size": 65535, 00:21:20.270 "bdev_io_cache_size": 256, 00:21:20.270 "bdev_auto_examine": true, 00:21:20.270 "iobuf_small_cache_size": 128, 00:21:20.270 "iobuf_large_cache_size": 16 00:21:20.270 } 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "method": "bdev_raid_set_options", 00:21:20.270 "params": { 00:21:20.270 "process_window_size_kb": 1024 00:21:20.270 } 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "method": "bdev_iscsi_set_options", 00:21:20.270 "params": { 00:21:20.270 "timeout_sec": 30 00:21:20.270 } 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "method": "bdev_nvme_set_options", 00:21:20.270 "params": { 00:21:20.270 "action_on_timeout": "none", 00:21:20.270 "timeout_us": 0, 00:21:20.270 "timeout_admin_us": 0, 00:21:20.270 "keep_alive_timeout_ms": 10000, 00:21:20.270 "arbitration_burst": 0, 00:21:20.270 "low_priority_weight": 0, 00:21:20.270 "medium_priority_weight": 0, 00:21:20.270 "high_priority_weight": 0, 00:21:20.270 "nvme_adminq_poll_period_us": 10000, 00:21:20.270 "nvme_ioq_poll_period_us": 0, 00:21:20.270 "io_queue_requests": 512, 00:21:20.270 "delay_cmd_submit": true, 00:21:20.270 "transport_retry_count": 4, 00:21:20.270 "bdev_retry_count": 3, 00:21:20.270 "transport_ack_timeout": 0, 00:21:20.270 "ctrlr_loss_timeout_sec": 0, 00:21:20.270 "reconnect_delay_sec": 0, 00:21:20.270 "fast_io_fail_timeout_sec": 0, 00:21:20.270 "disable_auto_failback": false, 00:21:20.270 "generate_uuids": false, 00:21:20.270 "transport_tos": 0, 00:21:20.270 "nvme_error_stat": false, 00:21:20.270 "rdma_srq_size": 0, 00:21:20.270 "io_path_stat": false, 00:21:20.270 "allow_accel_sequence": false, 00:21:20.270 "rdma_max_cq_size": 0, 00:21:20.270 "rdma_cm_event_timeout_ms": 0, 00:21:20.270 "dhchap_digests": [ 00:21:20.270 "sha256", 00:21:20.270 "sha384", 00:21:20.270 "sha512" 00:21:20.270 ], 00:21:20.270 "dhchap_dhgroups": [ 00:21:20.270 "null", 00:21:20.270 "ffdhe2048", 00:21:20.270 "ffdhe3072", 00:21:20.270 "ffdhe4096", 00:21:20.270 "ffdhe6144", 00:21:20.270 "ffdhe8192" 00:21:20.270 ] 00:21:20.270 } 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "method": "bdev_nvme_attach_controller", 00:21:20.270 "params": { 00:21:20.270 "name": "nvme0", 00:21:20.270 "trtype": "TCP", 00:21:20.270 "adrfam": "IPv4", 00:21:20.270 "traddr": "10.0.0.2", 00:21:20.270 "trsvcid": "4420", 00:21:20.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.270 "prchk_reftag": false, 00:21:20.270 "prchk_guard": false, 00:21:20.270 "ctrlr_loss_timeout_sec": 0, 00:21:20.270 "reconnect_delay_sec": 0, 00:21:20.270 "fast_io_fail_timeout_sec": 0, 00:21:20.270 "psk": "key0", 00:21:20.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.270 "hdgst": false, 00:21:20.270 "ddgst": false 00:21:20.270 } 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "method": "bdev_nvme_set_hotplug", 00:21:20.270 "params": { 00:21:20.270 "period_us": 100000, 00:21:20.270 "enable": false 00:21:20.270 } 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "method": "bdev_enable_histogram", 00:21:20.270 "params": { 00:21:20.270 "name": "nvme0n1", 00:21:20.270 "enable": true 00:21:20.270 } 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "method": "bdev_wait_for_examine" 00:21:20.270 } 00:21:20.270 ] 00:21:20.270 }, 00:21:20.270 { 00:21:20.270 "subsystem": "nbd", 00:21:20.270 "config": [] 00:21:20.270 } 00:21:20.270 ] 00:21:20.270 }' 00:21:20.270 11:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 642300 00:21:20.270 11:32:03 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 642300 ']' 00:21:20.270 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 642300 00:21:20.270 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.270 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.270 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 642300 00:21:20.270 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:20.270 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:20.270 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 642300' 00:21:20.270 killing process with pid 642300 00:21:20.270 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 642300 00:21:20.270 Received shutdown signal, test time was about 1.000000 seconds 00:21:20.270 00:21:20.270 Latency(us) 00:21:20.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.270 =================================================================================================================== 00:21:20.270 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.270 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 642300 00:21:20.529 11:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 642054 00:21:20.529 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 642054 ']' 00:21:20.529 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 642054 00:21:20.529 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.529 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.529 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 642054 00:21:20.529 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:20.529 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:20.529 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 642054' 00:21:20.529 killing process with pid 642054 00:21:20.529 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 642054 00:21:20.529 11:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 642054 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:20.788 "subsystems": [ 00:21:20.788 { 00:21:20.788 "subsystem": "keyring", 00:21:20.788 "config": [ 00:21:20.788 { 00:21:20.788 "method": "keyring_file_add_key", 00:21:20.788 "params": { 00:21:20.788 "name": "key0", 00:21:20.788 "path": "/tmp/tmp.WUYqWABvAG" 00:21:20.788 } 00:21:20.788 } 00:21:20.788 ] 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "subsystem": "iobuf", 00:21:20.788 "config": [ 00:21:20.788 { 00:21:20.788 "method": "iobuf_set_options", 00:21:20.788 "params": { 00:21:20.788 "small_pool_count": 8192, 00:21:20.788 "large_pool_count": 1024, 00:21:20.788 "small_bufsize": 8192, 00:21:20.788 "large_bufsize": 135168 00:21:20.788 } 00:21:20.788 } 00:21:20.788 ] 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "subsystem": "sock", 00:21:20.788 "config": [ 00:21:20.788 { 00:21:20.788 "method": 
"sock_set_default_impl", 00:21:20.788 "params": { 00:21:20.788 "impl_name": "posix" 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "sock_impl_set_options", 00:21:20.788 "params": { 00:21:20.788 "impl_name": "ssl", 00:21:20.788 "recv_buf_size": 4096, 00:21:20.788 "send_buf_size": 4096, 00:21:20.788 "enable_recv_pipe": true, 00:21:20.788 "enable_quickack": false, 00:21:20.788 "enable_placement_id": 0, 00:21:20.788 "enable_zerocopy_send_server": true, 00:21:20.788 "enable_zerocopy_send_client": false, 00:21:20.788 "zerocopy_threshold": 0, 00:21:20.788 "tls_version": 0, 00:21:20.788 "enable_ktls": false 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "sock_impl_set_options", 00:21:20.788 "params": { 00:21:20.788 "impl_name": "posix", 00:21:20.788 "recv_buf_size": 2097152, 00:21:20.788 "send_buf_size": 2097152, 00:21:20.788 "enable_recv_pipe": true, 00:21:20.788 "enable_quickack": false, 00:21:20.788 "enable_placement_id": 0, 00:21:20.788 "enable_zerocopy_send_server": true, 00:21:20.788 "enable_zerocopy_send_client": false, 00:21:20.788 "zerocopy_threshold": 0, 00:21:20.788 "tls_version": 0, 00:21:20.788 "enable_ktls": false 00:21:20.788 } 00:21:20.788 } 00:21:20.788 ] 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "subsystem": "vmd", 00:21:20.788 "config": [] 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "subsystem": "accel", 00:21:20.788 "config": [ 00:21:20.788 { 00:21:20.788 "method": "accel_set_options", 00:21:20.788 "params": { 00:21:20.788 "small_cache_size": 128, 00:21:20.788 "large_cache_size": 16, 00:21:20.788 "task_count": 2048, 00:21:20.788 "sequence_count": 2048, 00:21:20.788 "buf_count": 2048 00:21:20.788 } 00:21:20.788 } 00:21:20.788 ] 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "subsystem": "bdev", 00:21:20.788 "config": [ 00:21:20.788 { 00:21:20.788 "method": "bdev_set_options", 00:21:20.788 "params": { 00:21:20.788 "bdev_io_pool_size": 65535, 00:21:20.788 "bdev_io_cache_size": 256, 00:21:20.788 "bdev_auto_examine": true, 00:21:20.788 "iobuf_small_cache_size": 128, 00:21:20.788 "iobuf_large_cache_size": 16 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "bdev_raid_set_options", 00:21:20.788 "params": { 00:21:20.788 "process_window_size_kb": 1024 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "bdev_iscsi_set_options", 00:21:20.788 "params": { 00:21:20.788 "timeout_sec": 30 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "bdev_nvme_set_options", 00:21:20.788 "params": { 00:21:20.788 "action_on_timeout": "none", 00:21:20.788 "timeout_us": 0, 00:21:20.788 "timeout_admin_us": 0, 00:21:20.788 "keep_alive_timeout_ms": 10000, 00:21:20.788 "arbitration_burst": 0, 00:21:20.788 "low_priority_weight": 0, 00:21:20.788 "medium_priority_weight": 0, 00:21:20.788 "high_priority_weight": 0, 00:21:20.788 "nvme_adminq_poll_period_us": 10000, 00:21:20.788 "nvme_ioq_poll_period_us": 0, 00:21:20.788 "io_queue_requests": 0, 00:21:20.788 "delay_cmd_submit": true, 00:21:20.788 "transport_retry_count": 4, 00:21:20.788 "bdev_retry_count": 3, 00:21:20.788 "transport_ack_timeout": 0, 00:21:20.788 "ctrlr_loss_timeout_sec": 0, 00:21:20.788 "reconnect_delay_sec": 0, 00:21:20.788 "fast_io_fail_timeout_sec": 0, 00:21:20.788 "disable_auto_failback": false, 00:21:20.788 "generate_uuids": false, 00:21:20.788 "transport_tos": 0, 00:21:20.788 "nvme_error_stat": false, 00:21:20.788 "rdma_srq_size": 0, 00:21:20.788 "io_path_stat": false, 00:21:20.788 "allow_accel_sequence": false, 00:21:20.788 "rdma_max_cq_size": 0, 
00:21:20.788 "rdma_cm_event_timeout_ms": 0, 00:21:20.788 "dhchap_digests": [ 00:21:20.788 "sha256", 00:21:20.788 "sha384", 00:21:20.788 "sha512" 00:21:20.788 ], 00:21:20.788 "dhchap_dhgroups": [ 00:21:20.788 "null", 00:21:20.788 "ffdhe2048", 00:21:20.788 "ffdhe3072", 00:21:20.788 "ffdhe4096", 00:21:20.788 "ffdhe6144", 00:21:20.788 "ffdhe8192" 00:21:20.788 ] 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "bdev_nvme_set_hotplug", 00:21:20.788 "params": { 00:21:20.788 "period_us": 100000, 00:21:20.788 "enable": false 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "bdev_malloc_create", 00:21:20.788 "params": { 00:21:20.788 "name": "malloc0", 00:21:20.788 "num_blocks": 8192, 00:21:20.788 "block_size": 4096, 00:21:20.788 "physical_block_size": 4096, 00:21:20.788 "uuid": "e9e1d9fb-10ba-4ea1-b5ac-fcf22b31d93b", 00:21:20.788 "optimal_io_boundary": 0 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "bdev_wait_for_examine" 00:21:20.788 } 00:21:20.788 ] 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "subsystem": "nbd", 00:21:20.788 "config": [] 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "subsystem": "scheduler", 00:21:20.788 "config": [ 00:21:20.788 { 00:21:20.788 "method": "framework_set_scheduler", 00:21:20.788 "params": { 00:21:20.788 "name": "static" 00:21:20.788 } 00:21:20.788 } 00:21:20.788 ] 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "subsystem": "nvmf", 00:21:20.788 "config": [ 00:21:20.788 { 00:21:20.788 "method": "nvmf_set_config", 00:21:20.788 "params": { 00:21:20.788 "discovery_filter": "match_any", 00:21:20.788 "admin_cmd_passthru": { 00:21:20.788 "identify_ctrlr": false 00:21:20.788 } 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "nvmf_set_max_subsystems", 00:21:20.788 "params": { 00:21:20.788 "max_subsystems": 1024 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "nvmf_set_crdt", 00:21:20.788 "params": { 00:21:20.788 "crdt1": 0, 00:21:20.788 "crdt2": 0, 00:21:20.788 "crdt3": 0 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "nvmf_create_transport", 00:21:20.788 "params": { 00:21:20.788 "trtype": "TCP", 00:21:20.788 "max_queue_depth": 128, 00:21:20.788 "max_io_qpairs_per_ctrlr": 127, 00:21:20.788 "in_capsule_data_size": 4096, 00:21:20.788 "max_io_size": 131072, 00:21:20.788 "io_unit_size": 131072, 00:21:20.788 "max_aq_depth": 128, 00:21:20.788 "num_shared_buffers": 511, 00:21:20.788 "buf_cache_size": 4294967295, 00:21:20.788 "dif_insert_or_strip": false, 00:21:20.788 "zcopy": false, 00:21:20.788 "c2h_success": false, 00:21:20.788 "sock_priority": 0, 00:21:20.788 "abort_timeout_sec": 1, 00:21:20.788 "ack_timeout": 0, 00:21:20.788 "data_wr_pool_size": 0 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "nvmf_create_subsystem", 00:21:20.788 "params": { 00:21:20.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.788 "allow_any_host": false, 00:21:20.788 "serial_number": "00000000000000000000", 00:21:20.788 "model_number": "SPDK bdev Controller", 00:21:20.788 "max_namespaces": 32, 00:21:20.788 "min_cntlid": 1, 00:21:20.788 "max_cntlid": 65519, 00:21:20.788 "ana_reporting": false 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "nvmf_subsystem_add_host", 00:21:20.788 "params": { 00:21:20.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.788 "host": "nqn.2016-06.io.spdk:host1", 00:21:20.788 "psk": "key0" 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "nvmf_subsystem_add_ns", 00:21:20.788 "params": { 
00:21:20.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.788 "namespace": { 00:21:20.788 "nsid": 1, 00:21:20.788 "bdev_name": "malloc0", 00:21:20.788 "nguid": "E9E1D9FB10BA4EA1B5ACFCF22B31D93B", 00:21:20.788 "uuid": "e9e1d9fb-10ba-4ea1-b5ac-fcf22b31d93b", 00:21:20.788 "no_auto_visible": false 00:21:20.788 } 00:21:20.788 } 00:21:20.788 }, 00:21:20.788 { 00:21:20.788 "method": "nvmf_subsystem_add_listener", 00:21:20.788 "params": { 00:21:20.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.788 "listen_address": { 00:21:20.788 "trtype": "TCP", 00:21:20.788 "adrfam": "IPv4", 00:21:20.788 "traddr": "10.0.0.2", 00:21:20.788 "trsvcid": "4420" 00:21:20.788 }, 00:21:20.788 "secure_channel": true 00:21:20.788 } 00:21:20.788 } 00:21:20.788 ] 00:21:20.788 } 00:21:20.788 ] 00:21:20.788 }' 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=642779 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 642779 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 642779 ']' 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.788 11:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.788 [2024-07-15 11:32:04.191676] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:20.788 [2024-07-15 11:32:04.191725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.788 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.788 [2024-07-15 11:32:04.259404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.788 [2024-07-15 11:32:04.335987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.788 [2024-07-15 11:32:04.336024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.788 [2024-07-15 11:32:04.336035] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.788 [2024-07-15 11:32:04.336041] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.788 [2024-07-15 11:32:04.336045] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:20.788 [2024-07-15 11:32:04.336098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.046 [2024-07-15 11:32:04.548016] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.046 [2024-07-15 11:32:04.580037] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.046 [2024-07-15 11:32:04.587357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=643025 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 643025 /var/tmp/bdevperf.sock 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 643025 ']' 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
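Stripped of the workspace paths, the launch above is bdevperf being handed a JSON config on a file descriptor through process substitution and then driven over its RPC socket. A minimal sketch of that same pattern follows; the empty config string is a stand-in, since the real run pipes in the full document echoed next.
# Sketch only -- mirrors the "-c /dev/fd/63" invocation used by tls.sh above.
BPERF_CFG='{ "subsystems": [] }'   # placeholder for the bdevperf config echoed below
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$BPERF_CFG") &
# once the RPC socket accepts connections (the harness polls for this),
# the I/O pass is kicked off over the same socket, as seen later in the log:
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests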
00:21:21.612 11:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:21.612 "subsystems": [ 00:21:21.612 { 00:21:21.612 "subsystem": "keyring", 00:21:21.612 "config": [ 00:21:21.612 { 00:21:21.612 "method": "keyring_file_add_key", 00:21:21.612 "params": { 00:21:21.612 "name": "key0", 00:21:21.612 "path": "/tmp/tmp.WUYqWABvAG" 00:21:21.612 } 00:21:21.612 } 00:21:21.612 ] 00:21:21.612 }, 00:21:21.612 { 00:21:21.612 "subsystem": "iobuf", 00:21:21.612 "config": [ 00:21:21.612 { 00:21:21.612 "method": "iobuf_set_options", 00:21:21.612 "params": { 00:21:21.612 "small_pool_count": 8192, 00:21:21.612 "large_pool_count": 1024, 00:21:21.612 "small_bufsize": 8192, 00:21:21.612 "large_bufsize": 135168 00:21:21.612 } 00:21:21.612 } 00:21:21.612 ] 00:21:21.612 }, 00:21:21.612 { 00:21:21.612 "subsystem": "sock", 00:21:21.612 "config": [ 00:21:21.612 { 00:21:21.612 "method": "sock_set_default_impl", 00:21:21.612 "params": { 00:21:21.612 "impl_name": "posix" 00:21:21.612 } 00:21:21.612 }, 00:21:21.612 { 00:21:21.612 "method": "sock_impl_set_options", 00:21:21.612 "params": { 00:21:21.613 "impl_name": "ssl", 00:21:21.613 "recv_buf_size": 4096, 00:21:21.613 "send_buf_size": 4096, 00:21:21.613 "enable_recv_pipe": true, 00:21:21.613 "enable_quickack": false, 00:21:21.613 "enable_placement_id": 0, 00:21:21.613 "enable_zerocopy_send_server": true, 00:21:21.613 "enable_zerocopy_send_client": false, 00:21:21.613 "zerocopy_threshold": 0, 00:21:21.613 "tls_version": 0, 00:21:21.613 "enable_ktls": false 00:21:21.613 } 00:21:21.613 }, 00:21:21.613 { 00:21:21.613 "method": "sock_impl_set_options", 00:21:21.613 "params": { 00:21:21.613 "impl_name": "posix", 00:21:21.613 "recv_buf_size": 2097152, 00:21:21.613 "send_buf_size": 2097152, 00:21:21.613 "enable_recv_pipe": true, 00:21:21.613 "enable_quickack": false, 00:21:21.613 "enable_placement_id": 0, 00:21:21.613 "enable_zerocopy_send_server": true, 00:21:21.613 "enable_zerocopy_send_client": false, 00:21:21.613 "zerocopy_threshold": 0, 00:21:21.613 "tls_version": 0, 00:21:21.613 "enable_ktls": false 00:21:21.613 } 00:21:21.613 } 00:21:21.613 ] 00:21:21.613 }, 00:21:21.613 { 00:21:21.613 "subsystem": "vmd", 00:21:21.613 "config": [] 00:21:21.613 }, 00:21:21.613 { 00:21:21.613 "subsystem": "accel", 00:21:21.613 "config": [ 00:21:21.613 { 00:21:21.613 "method": "accel_set_options", 00:21:21.613 "params": { 00:21:21.613 "small_cache_size": 128, 00:21:21.613 "large_cache_size": 16, 00:21:21.613 "task_count": 2048, 00:21:21.613 "sequence_count": 2048, 00:21:21.613 "buf_count": 2048 00:21:21.613 } 00:21:21.613 } 00:21:21.613 ] 00:21:21.613 }, 00:21:21.613 { 00:21:21.613 "subsystem": "bdev", 00:21:21.613 "config": [ 00:21:21.613 { 00:21:21.613 "method": "bdev_set_options", 00:21:21.613 "params": { 00:21:21.613 "bdev_io_pool_size": 65535, 00:21:21.613 "bdev_io_cache_size": 256, 00:21:21.613 "bdev_auto_examine": true, 00:21:21.613 "iobuf_small_cache_size": 128, 00:21:21.613 "iobuf_large_cache_size": 16 00:21:21.613 } 00:21:21.613 }, 00:21:21.613 { 00:21:21.613 "method": "bdev_raid_set_options", 00:21:21.613 "params": { 00:21:21.613 "process_window_size_kb": 1024 00:21:21.613 } 00:21:21.613 }, 00:21:21.613 { 00:21:21.613 "method": "bdev_iscsi_set_options", 00:21:21.613 "params": { 00:21:21.613 "timeout_sec": 30 00:21:21.613 } 00:21:21.613 }, 00:21:21.613 { 00:21:21.613 "method": "bdev_nvme_set_options", 00:21:21.613 "params": { 00:21:21.613 "action_on_timeout": "none", 00:21:21.613 "timeout_us": 0, 00:21:21.613 "timeout_admin_us": 0, 00:21:21.613 "keep_alive_timeout_ms": 
10000, 00:21:21.613 "arbitration_burst": 0, 00:21:21.613 "low_priority_weight": 0, 00:21:21.613 "medium_priority_weight": 0, 00:21:21.613 "high_priority_weight": 0, 00:21:21.613 "nvme_adminq_poll_period_us": 10000, 00:21:21.613 "nvme_ioq_poll_period_us": 0, 00:21:21.613 "io_queue_requests": 512, 00:21:21.613 "delay_cmd_submit": true, 00:21:21.613 "transport_retry_count": 4, 00:21:21.613 "bdev_retry_count": 3, 00:21:21.613 "transport_ack_timeout": 0, 00:21:21.613 "ctrlr_loss_timeout_sec": 0, 00:21:21.613 "reconnect_delay_sec": 0, 00:21:21.613 "fast_io_fail_timeout_sec": 0, 00:21:21.613 "disable_auto_failback": false, 00:21:21.613 "generate_uuids": false, 00:21:21.613 "transport_tos": 0, 00:21:21.613 "nvme_error_stat": false, 00:21:21.613 "rdma_srq_size": 0, 00:21:21.613 "io_path_stat": false, 00:21:21.613 "allow_accel_sequence": false, 00:21:21.613 "rdma_max_cq_size": 0, 00:21:21.613 "rdma_cm_event_timeout_ms": 0, 00:21:21.613 "dhchap_digests": [ 00:21:21.613 "sha256", 00:21:21.613 "sha384", 00:21:21.613 "sha512" 00:21:21.613 ], 00:21:21.613 "dhchap_dhgroups": [ 00:21:21.613 "null", 00:21:21.613 "ffdhe2048", 00:21:21.613 "ffdhe3072", 00:21:21.613 "ffdhe4096", 00:21:21.613 "ffdhe6144", 00:21:21.613 "ffdhe8192" 00:21:21.613 ] 00:21:21.613 } 00:21:21.613 }, 00:21:21.613 { 00:21:21.613 "method": "bdev_nvme_attach_controller", 00:21:21.613 "params": { 00:21:21.613 "name": "nvme0", 00:21:21.613 "trtype": "TCP", 00:21:21.613 "adrfam": "IPv4", 00:21:21.613 "traddr": "10.0.0.2", 00:21:21.613 "trsvcid": "4420", 00:21:21.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.613 "prchk_reftag": false, 00:21:21.613 "prchk_guard": false, 00:21:21.613 "ctrlr_loss_timeout_sec": 0, 00:21:21.613 "reconnect_delay_sec": 0, 00:21:21.613 "fast_io_fail_timeout_sec": 0, 00:21:21.613 "psk": "key0", 00:21:21.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.613 "hdgst": false, 00:21:21.613 "ddgst": false 00:21:21.613 } 00:21:21.613 }, 00:21:21.613 { 00:21:21.613 "method": "bdev_nvme_set_hotplug", 00:21:21.613 "params": { 00:21:21.613 "period_us": 100000, 00:21:21.613 "enable": false 00:21:21.613 } 00:21:21.613 }, 00:21:21.613 { 00:21:21.613 "method": "bdev_enable_histogram", 00:21:21.613 "params": { 00:21:21.613 "name": "nvme0n1", 00:21:21.613 "enable": true 00:21:21.613 } 00:21:21.613 }, 00:21:21.613 { 00:21:21.613 "method": "bdev_wait_for_examine" 00:21:21.613 } 00:21:21.613 ] 00:21:21.613 }, 00:21:21.613 { 00:21:21.613 "subsystem": "nbd", 00:21:21.613 "config": [] 00:21:21.613 } 00:21:21.613 ] 00:21:21.613 }' 00:21:21.613 11:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.613 11:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.613 [2024-07-15 11:32:05.080594] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:21:21.613 [2024-07-15 11:32:05.080644] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid643025 ] 00:21:21.613 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.613 [2024-07-15 11:32:05.134772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.870 [2024-07-15 11:32:05.211975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.870 [2024-07-15 11:32:05.363555] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:22.435 11:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.435 11:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:22.435 11:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:22.435 11:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:22.694 11:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.694 11:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:22.694 Running I/O for 1 seconds... 00:21:23.628 00:21:23.628 Latency(us) 00:21:23.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.628 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:23.628 Verification LBA range: start 0x0 length 0x2000 00:21:23.628 nvme0n1 : 1.02 5294.34 20.68 0.00 0.00 23954.54 5584.81 26100.42 00:21:23.628 =================================================================================================================== 00:21:23.628 Total : 5294.34 20.68 0.00 0.00 23954.54 5584.81 26100.42 00:21:23.628 0 00:21:23.628 11:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:23.628 11:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:23.628 11:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:23.628 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:23.629 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:23.629 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:23.629 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:23.629 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:23.629 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:23.629 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:23.629 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:23.887 nvmf_trace.0 00:21:23.887 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:23.887 11:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 643025 00:21:23.887 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 643025 ']' 00:21:23.887 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 643025 00:21:23.887 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:23.887 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.887 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 643025 00:21:23.887 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:23.887 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:23.887 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 643025' 00:21:23.887 killing process with pid 643025 00:21:23.887 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 643025 00:21:23.887 Received shutdown signal, test time was about 1.000000 seconds 00:21:23.887 00:21:23.887 Latency(us) 00:21:23.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.887 =================================================================================================================== 00:21:23.887 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:23.887 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 643025 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.146 rmmod nvme_tcp 00:21:24.146 rmmod nvme_fabrics 00:21:24.146 rmmod nvme_keyring 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 642779 ']' 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 642779 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 642779 ']' 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 642779 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 642779 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 642779' 00:21:24.146 killing process with pid 642779 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 642779 00:21:24.146 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 642779 00:21:24.404 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:24.404 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:24.404 11:32:07 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:24.404 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:24.404 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:24.404 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.404 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.404 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.308 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:26.308 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.iOBRxmzdmu /tmp/tmp.v5WC2zJplZ /tmp/tmp.WUYqWABvAG 00:21:26.308 00:21:26.308 real 1m25.468s 00:21:26.308 user 2m10.799s 00:21:26.308 sys 0m30.303s 00:21:26.308 11:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:26.308 11:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.308 ************************************ 00:21:26.308 END TEST nvmf_tls 00:21:26.308 ************************************ 00:21:26.567 11:32:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:26.567 11:32:09 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:26.567 11:32:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:26.567 11:32:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:26.567 11:32:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:26.567 ************************************ 00:21:26.567 START TEST nvmf_fips 00:21:26.567 ************************************ 00:21:26.568 11:32:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:26.568 * Looking for test storage... 
00:21:26.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.568 11:32:10 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:26.568 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:26.827 Error setting digest 00:21:26.827 00D2BE4D627F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:26.827 00D2BE4D627F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:26.827 11:32:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:32.101 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.101 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.101 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.101 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.101 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.101 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.101 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.101 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.102 
11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:32.102 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:32.102 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:32.102 Found net devices under 0000:86:00.0: cvl_0_0 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:32.102 Found net devices under 0000:86:00.1: cvl_0_1 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:32.102 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:32.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:21:32.362 00:21:32.362 --- 10.0.0.2 ping statistics --- 00:21:32.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.362 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:32.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:21:32.362 00:21:32.362 --- 10.0.0.1 ping statistics --- 00:21:32.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.362 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:32.362 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=646827 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 646827 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 646827 ']' 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.621 11:32:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:32.621 [2024-07-15 11:32:16.042663] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:32.621 [2024-07-15 11:32:16.042711] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.621 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.621 [2024-07-15 11:32:16.113977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.621 [2024-07-15 11:32:16.190466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.621 [2024-07-15 11:32:16.190507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
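For readers following the nvmf_tcp_init phase traced above, the network bring-up reduces to the sequence below. This is a condensed sketch assembled only from commands visible in this trace; cvl_0_0/cvl_0_1 are the two E810 ports discovered earlier and cvl_0_0_ns_spdk is the target-side namespace.

  # Flush old addresses and move the target-side port into its own namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Address both ends: 10.0.0.1 on the initiator port, 10.0.0.2 inside the namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the default NVMe/TCP port and confirm reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Target-side commands later in the trace are wrapped with "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD) for the same reason.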
00:21:32.621 [2024-07-15 11:32:16.190515] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.621 [2024-07-15 11:32:16.190521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.621 [2024-07-15 11:32:16.190526] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.621 [2024-07-15 11:32:16.190544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:33.561 11:32:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:33.561 [2024-07-15 11:32:17.026665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.561 [2024-07-15 11:32:17.042664] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:33.561 [2024-07-15 11:32:17.042851] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.561 [2024-07-15 11:32:17.071040] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:33.561 malloc0 00:21:33.561 11:32:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:33.561 11:32:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=647075 00:21:33.561 11:32:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:33.561 11:32:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 647075 /var/tmp/bdevperf.sock 00:21:33.561 11:32:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 647075 ']' 00:21:33.561 11:32:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.561 11:32:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:21:33.561 11:32:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:33.561 11:32:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.561 11:32:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:33.820 [2024-07-15 11:32:17.164562] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:33.820 [2024-07-15 11:32:17.164609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647075 ] 00:21:33.820 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.820 [2024-07-15 11:32:17.230647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.820 [2024-07-15 11:32:17.304441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.387 11:32:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.387 11:32:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:34.387 11:32:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:34.646 [2024-07-15 11:32:18.111279] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.647 [2024-07-15 11:32:18.111362] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:34.647 TLSTESTn1 00:21:34.647 11:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:34.906 Running I/O for 10 seconds... 
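The TLS exchange being exercised here hinges on a pre-shared key file. Pulled out of the trace into a short, runnable sketch (Jenkins workspace paths shortened to repository-relative ones, and the redirection into key.txt is implied by the trace rather than spelled out):

  # Provision the NVMe/TCP PSK interchange key; it must not be world-readable.
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > test/nvmf/fips/key.txt
  chmod 0600 test/nvmf/fips/key.txt

  # bdevperf waits (-z) on its own RPC socket until a controller is attached.
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # Attach a TLS-protected controller through the bdevperf RPC socket, then drive I/O.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/fips/key.txt
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The --psk key-file option is what triggers the "spdk_nvme_ctrlr_opts.psk" deprecation notice seen in the trace.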
00:21:44.890 00:21:44.890 Latency(us) 00:21:44.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.890 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:44.890 Verification LBA range: start 0x0 length 0x2000 00:21:44.890 TLSTESTn1 : 10.01 5532.00 21.61 0.00 0.00 23101.76 6069.20 39663.53 00:21:44.890 =================================================================================================================== 00:21:44.890 Total : 5532.00 21.61 0.00 0.00 23101.76 6069.20 39663.53 00:21:44.890 0 00:21:44.890 11:32:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:44.890 11:32:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:44.890 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:44.890 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:44.890 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:44.890 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:44.890 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:44.890 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:44.890 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:44.890 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:44.890 nvmf_trace.0 00:21:44.890 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:44.891 11:32:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 647075 00:21:44.891 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 647075 ']' 00:21:44.891 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 647075 00:21:44.891 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:44.891 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:44.891 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 647075 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 647075' 00:21:45.150 killing process with pid 647075 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 647075 00:21:45.150 Received shutdown signal, test time was about 10.000000 seconds 00:21:45.150 00:21:45.150 Latency(us) 00:21:45.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.150 =================================================================================================================== 00:21:45.150 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.150 [2024-07-15 11:32:28.487146] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 647075 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.150 rmmod nvme_tcp 00:21:45.150 rmmod nvme_fabrics 00:21:45.150 rmmod nvme_keyring 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 646827 ']' 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 646827 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 646827 ']' 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 646827 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.150 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 646827 00:21:45.409 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:45.409 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:45.409 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 646827' 00:21:45.409 killing process with pid 646827 00:21:45.410 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 646827 00:21:45.410 [2024-07-15 11:32:28.772072] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:45.410 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 646827 00:21:45.410 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:45.410 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:45.410 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:45.410 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.410 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.410 11:32:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.410 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.410 11:32:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.944 11:32:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:47.944 11:32:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:47.944 00:21:47.944 real 0m21.074s 00:21:47.944 user 0m23.036s 00:21:47.944 sys 0m8.891s 00:21:47.944 11:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:47.944 11:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:47.944 ************************************ 00:21:47.944 END TEST nvmf_fips 00:21:47.944 
************************************ 00:21:47.944 11:32:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:47.944 11:32:31 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:47.944 11:32:31 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:47.944 11:32:31 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:47.944 11:32:31 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:47.944 11:32:31 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.944 11:32:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:53.221 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:53.221 11:32:36 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:53.221 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:53.221 Found net devices under 0000:86:00.0: cvl_0_0 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:53.221 Found net devices under 0000:86:00.1: cvl_0_1 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:53.221 11:32:36 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:53.221 11:32:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:53.221 11:32:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:21:53.221 11:32:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:53.221 ************************************ 00:21:53.221 START TEST nvmf_perf_adq 00:21:53.221 ************************************ 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:53.221 * Looking for test storage... 00:21:53.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.221 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.222 11:32:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:53.222 11:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.222 11:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:58.499 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:58.499 Found 0000:86:00.1 (0x8086 - 0x159b) 
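The discovery loop above (and its repeat a little later inside nvmftestinit) works the same way each time: the supported PCI IDs are matched (both ports here report 0x8086:0x159b, an Intel E810 driven by ice) and each PCI function is mapped to its kernel netdev through sysfs. A condensed sketch using only the paths shown in the trace:

  # Resolve each supported NIC's net device name from sysfs and collect it.
  net_devs=()
  for pci in 0000:86:00.0 0000:86:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done

The resulting list (cvl_0_0, cvl_0_1) is what later becomes TCP_INTERFACE_LIST.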
00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:58.499 Found net devices under 0000:86:00.0: cvl_0_0 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.499 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.759 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.759 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.759 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.759 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:58.759 Found net devices under 0000:86:00.1: cvl_0_1 00:21:58.759 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.759 11:32:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.759 11:32:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.759 11:32:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:58.759 11:32:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:58.759 11:32:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:58.759 11:32:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:59.695 11:32:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:01.599 11:32:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:06.876 11:32:50 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:06.876 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:06.876 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:06.876 Found net devices under 0000:86:00.0: cvl_0_0 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:06.876 Found net devices under 0000:86:00.1: cvl_0_1 00:22:06.876 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.877 11:32:50 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:06.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:22:06.877 00:22:06.877 --- 10.0.0.2 ping statistics --- 00:22:06.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.877 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:22:06.877 00:22:06.877 --- 10.0.0.1 ping statistics --- 00:22:06.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.877 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=656763 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 656763 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 656763 ']' 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.877 11:32:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.136 [2024-07-15 11:32:50.494678] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
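With the namespace and addresses re-established, the ADQ target is launched inside the target namespace on four cores with --wait-for-rpc, so the socket implementation options can be adjusted before the framework initializes. A simplified stand-in for what the nvmfappstart helper does (backgrounding and pid capture shown here are shorthand for the helper's bookkeeping):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!   # recorded in the trace as nvmfpid=656763
  # waitforlisten then blocks until the app listens on /var/tmp/spdk.sock before any RPC is issued.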
00:22:07.136 [2024-07-15 11:32:50.494731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.136 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.136 [2024-07-15 11:32:50.569161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:07.136 [2024-07-15 11:32:50.654940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.136 [2024-07-15 11:32:50.654976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.136 [2024-07-15 11:32:50.654983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.136 [2024-07-15 11:32:50.654989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.136 [2024-07-15 11:32:50.654994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.136 [2024-07-15 11:32:50.655052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.136 [2024-07-15 11:32:50.655082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.136 [2024-07-15 11:32:50.655104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:07.136 [2024-07-15 11:32:50.655105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.728 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.728 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:07.728 11:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.728 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.728 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 [2024-07-15 11:32:51.488545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 Malloc1 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 [2024-07-15 11:32:51.543140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=657015 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:07.987 11:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:08.245 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.172 11:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:10.172 11:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.172 11:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.172 11:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.172 11:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:10.172 
"tick_rate": 2300000000, 00:22:10.172 "poll_groups": [ 00:22:10.172 { 00:22:10.172 "name": "nvmf_tgt_poll_group_000", 00:22:10.172 "admin_qpairs": 1, 00:22:10.172 "io_qpairs": 1, 00:22:10.172 "current_admin_qpairs": 1, 00:22:10.172 "current_io_qpairs": 1, 00:22:10.172 "pending_bdev_io": 0, 00:22:10.172 "completed_nvme_io": 20818, 00:22:10.172 "transports": [ 00:22:10.172 { 00:22:10.172 "trtype": "TCP" 00:22:10.172 } 00:22:10.172 ] 00:22:10.172 }, 00:22:10.172 { 00:22:10.172 "name": "nvmf_tgt_poll_group_001", 00:22:10.172 "admin_qpairs": 0, 00:22:10.172 "io_qpairs": 1, 00:22:10.172 "current_admin_qpairs": 0, 00:22:10.172 "current_io_qpairs": 1, 00:22:10.172 "pending_bdev_io": 0, 00:22:10.172 "completed_nvme_io": 21249, 00:22:10.172 "transports": [ 00:22:10.172 { 00:22:10.172 "trtype": "TCP" 00:22:10.172 } 00:22:10.172 ] 00:22:10.172 }, 00:22:10.172 { 00:22:10.172 "name": "nvmf_tgt_poll_group_002", 00:22:10.172 "admin_qpairs": 0, 00:22:10.172 "io_qpairs": 1, 00:22:10.172 "current_admin_qpairs": 0, 00:22:10.172 "current_io_qpairs": 1, 00:22:10.172 "pending_bdev_io": 0, 00:22:10.172 "completed_nvme_io": 20903, 00:22:10.172 "transports": [ 00:22:10.172 { 00:22:10.172 "trtype": "TCP" 00:22:10.172 } 00:22:10.172 ] 00:22:10.172 }, 00:22:10.172 { 00:22:10.172 "name": "nvmf_tgt_poll_group_003", 00:22:10.172 "admin_qpairs": 0, 00:22:10.172 "io_qpairs": 1, 00:22:10.172 "current_admin_qpairs": 0, 00:22:10.172 "current_io_qpairs": 1, 00:22:10.172 "pending_bdev_io": 0, 00:22:10.172 "completed_nvme_io": 20587, 00:22:10.172 "transports": [ 00:22:10.172 { 00:22:10.172 "trtype": "TCP" 00:22:10.172 } 00:22:10.172 ] 00:22:10.172 } 00:22:10.173 ] 00:22:10.173 }' 00:22:10.173 11:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:10.173 11:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:10.173 11:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:10.173 11:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:10.173 11:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 657015 00:22:18.293 Initializing NVMe Controllers 00:22:18.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:18.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:18.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:18.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:18.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:18.293 Initialization complete. Launching workers. 
00:22:18.293 ======================================================== 00:22:18.293 Latency(us) 00:22:18.293 Device Information : IOPS MiB/s Average min max 00:22:18.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10840.88 42.35 5905.53 1482.61 9667.08 00:22:18.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11168.87 43.63 5730.34 2565.81 9366.13 00:22:18.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11038.28 43.12 5798.59 1914.77 9940.91 00:22:18.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11022.08 43.05 5807.91 2734.69 9786.38 00:22:18.293 ======================================================== 00:22:18.293 Total : 44070.10 172.15 5809.93 1482.61 9940.91 00:22:18.293 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:18.293 rmmod nvme_tcp 00:22:18.293 rmmod nvme_fabrics 00:22:18.293 rmmod nvme_keyring 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 656763 ']' 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 656763 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 656763 ']' 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 656763 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 656763 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 656763' 00:22:18.293 killing process with pid 656763 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 656763 00:22:18.293 11:33:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 656763 00:22:18.553 11:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:18.553 11:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:18.553 11:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:18.553 11:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:18.553 11:33:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:18.553 11:33:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.553 11:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.553 11:33:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.091 11:33:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:21.091 11:33:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:21.091 11:33:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:22.029 11:33:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:23.935 11:33:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.212 11:33:12 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.212 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:29.213 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:29.213 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:29.213 Found net devices under 0000:86:00.0: cvl_0_0 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:29.213 Found net devices under 0000:86:00.1: cvl_0_1 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.213 
11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:29.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:22:29.213 00:22:29.213 --- 10.0.0.2 ping statistics --- 00:22:29.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.213 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:29.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:22:29.213 00:22:29.213 --- 10.0.0.1 ping statistics --- 00:22:29.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.213 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:29.213 net.core.busy_poll = 1 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:29.213 net.core.busy_read = 1 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=661317 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 661317 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 661317 ']' 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.213 11:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.473 [2024-07-15 11:33:12.806259] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:22:29.473 [2024-07-15 11:33:12.806304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.473 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.473 [2024-07-15 11:33:12.876559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:29.473 [2024-07-15 11:33:12.955128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.473 [2024-07-15 11:33:12.955162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.473 [2024-07-15 11:33:12.955169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.473 [2024-07-15 11:33:12.955174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.473 [2024-07-15 11:33:12.955179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
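[editor's note] For reference, the adq_configure_driver steps interleaved above reduce to the following sequence; this is a condensed sketch using the cvl_0_0 interface, the 10.0.0.2:4420 listener, and the 2@0 2@2 queue split exactly as they appear in the tc invocations in this log, not values derived independently:

  # enable busy polling on the host
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # enable hardware TC offload on the E810 port and disable packet-inspect optimization
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  # split the queues into two traffic classes: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
      num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 in hardware (skip_sw)
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 \
      flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # align XPS/RXQ CPU affinity using the SPDK helper shown in the log
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
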
00:22:29.473 [2024-07-15 11:33:12.955234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.473 [2024-07-15 11:33:12.955335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.473 [2024-07-15 11:33:12.955369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.473 [2024-07-15 11:33:12.955369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.040 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.040 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:30.040 11:33:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.040 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:30.040 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.300 [2024-07-15 11:33:13.809208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.300 Malloc1 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.300 11:33:13 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.300 [2024-07-15 11:33:13.857028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=661565 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:30.300 11:33:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:30.560 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.466 11:33:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:32.466 11:33:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.466 11:33:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:32.466 11:33:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.466 11:33:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:32.466 "tick_rate": 2300000000, 00:22:32.466 "poll_groups": [ 00:22:32.466 { 00:22:32.466 "name": "nvmf_tgt_poll_group_000", 00:22:32.466 "admin_qpairs": 1, 00:22:32.466 "io_qpairs": 0, 00:22:32.466 "current_admin_qpairs": 1, 00:22:32.466 "current_io_qpairs": 0, 00:22:32.466 "pending_bdev_io": 0, 00:22:32.466 "completed_nvme_io": 0, 00:22:32.466 "transports": [ 00:22:32.466 { 00:22:32.466 "trtype": "TCP" 00:22:32.466 } 00:22:32.466 ] 00:22:32.466 }, 00:22:32.466 { 00:22:32.466 "name": "nvmf_tgt_poll_group_001", 00:22:32.466 "admin_qpairs": 0, 00:22:32.466 "io_qpairs": 4, 00:22:32.466 "current_admin_qpairs": 0, 00:22:32.466 "current_io_qpairs": 4, 00:22:32.466 "pending_bdev_io": 0, 00:22:32.466 "completed_nvme_io": 44140, 00:22:32.466 "transports": [ 00:22:32.466 { 00:22:32.466 "trtype": "TCP" 00:22:32.466 } 00:22:32.466 ] 00:22:32.466 }, 00:22:32.466 { 00:22:32.466 "name": "nvmf_tgt_poll_group_002", 00:22:32.466 "admin_qpairs": 0, 00:22:32.466 "io_qpairs": 0, 00:22:32.466 "current_admin_qpairs": 0, 00:22:32.466 "current_io_qpairs": 0, 00:22:32.466 "pending_bdev_io": 0, 00:22:32.466 "completed_nvme_io": 0, 00:22:32.466 
"transports": [ 00:22:32.466 { 00:22:32.466 "trtype": "TCP" 00:22:32.466 } 00:22:32.466 ] 00:22:32.466 }, 00:22:32.466 { 00:22:32.466 "name": "nvmf_tgt_poll_group_003", 00:22:32.466 "admin_qpairs": 0, 00:22:32.466 "io_qpairs": 0, 00:22:32.466 "current_admin_qpairs": 0, 00:22:32.466 "current_io_qpairs": 0, 00:22:32.466 "pending_bdev_io": 0, 00:22:32.466 "completed_nvme_io": 0, 00:22:32.466 "transports": [ 00:22:32.466 { 00:22:32.466 "trtype": "TCP" 00:22:32.466 } 00:22:32.466 ] 00:22:32.466 } 00:22:32.466 ] 00:22:32.466 }' 00:22:32.466 11:33:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:32.466 11:33:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:32.466 11:33:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:22:32.466 11:33:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:22:32.466 11:33:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 661565 00:22:40.582 Initializing NVMe Controllers 00:22:40.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:40.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:40.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:40.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:40.583 Initialization complete. Launching workers. 00:22:40.583 ======================================================== 00:22:40.583 Latency(us) 00:22:40.583 Device Information : IOPS MiB/s Average min max 00:22:40.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5868.90 22.93 10922.82 1319.05 55647.03 00:22:40.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6486.80 25.34 9867.90 1117.51 56029.21 00:22:40.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5412.80 21.14 11842.55 1366.39 57354.90 00:22:40.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5578.80 21.79 11474.00 1461.78 58012.37 00:22:40.583 ======================================================== 00:22:40.583 Total : 23347.30 91.20 10974.66 1117.51 58012.37 00:22:40.583 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.583 rmmod nvme_tcp 00:22:40.583 rmmod nvme_fabrics 00:22:40.583 rmmod nvme_keyring 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 661317 ']' 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 661317 
00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 661317 ']' 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 661317 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 661317 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 661317' 00:22:40.583 killing process with pid 661317 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 661317 00:22:40.583 11:33:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 661317 00:22:40.842 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.842 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:40.842 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:40.842 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.842 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.842 11:33:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.842 11:33:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.842 11:33:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.212 11:33:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:44.212 11:33:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:44.212 00:22:44.212 real 0m50.867s 00:22:44.212 user 2m48.916s 00:22:44.212 sys 0m9.981s 00:22:44.212 11:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:44.212 11:33:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.212 ************************************ 00:22:44.212 END TEST nvmf_perf_adq 00:22:44.212 ************************************ 00:22:44.212 11:33:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:44.212 11:33:27 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:44.212 11:33:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:44.212 11:33:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.212 11:33:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.212 ************************************ 00:22:44.212 START TEST nvmf_shutdown 00:22:44.212 ************************************ 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:44.212 * Looking for test storage... 
00:22:44.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.212 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:44.213 ************************************ 00:22:44.213 START TEST nvmf_shutdown_tc1 00:22:44.213 ************************************ 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:44.213 11:33:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:44.213 11:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:50.786 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:50.786 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.786 11:33:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:50.786 Found net devices under 0000:86:00.0: cvl_0_0 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:50.786 Found net devices under 0000:86:00.1: cvl_0_1 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:50.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:22:50.786 00:22:50.786 --- 10.0.0.2 ping statistics --- 00:22:50.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.786 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:22:50.786 00:22:50.786 --- 10.0.0.1 ping statistics --- 00:22:50.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.786 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=666815 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 666815 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 666815 ']' 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.786 11:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.786 [2024-07-15 11:33:33.508485] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
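Note on the setup traced above: nvmf_tcp_init splits the two e810 ports across network namespaces. Port cvl_0_0 is moved into the namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the default namespace as 10.0.0.1 (initiator side), an iptables rule opens TCP/4420, and one ping in each direction verifies the link. A minimal standalone sketch of the same steps (interface and namespace names taken from this run; another machine's ports may be named differently, and root is required) is:

# Minimal reproduction of the nvmf_tcp_init steps traced above (names from this run).
set -e
TGT_IF=cvl_0_0          # moved into the namespace, gets the target IP
INI_IF=cvl_0_1          # stays in the default namespace, gets the initiator IP
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# let NVMe/TCP traffic from the initiator interface reach the target port 4420
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# sanity checks, as in the trace: one ping in each direction
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The target application is then launched inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E command above), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace: the target listens on 10.0.0.2 while the initiator connects from the default namespace.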
00:22:50.786 [2024-07-15 11:33:33.508534] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.786 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.786 [2024-07-15 11:33:33.580171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.786 [2024-07-15 11:33:33.656101] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.786 [2024-07-15 11:33:33.656146] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.786 [2024-07-15 11:33:33.656153] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.786 [2024-07-15 11:33:33.656159] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.786 [2024-07-15 11:33:33.656164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.786 [2024-07-15 11:33:33.656278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.786 [2024-07-15 11:33:33.656388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.786 [2024-07-15 11:33:33.656470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.787 [2024-07-15 11:33:33.656471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.787 [2024-07-15 11:33:34.363256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.787 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.046 11:33:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.046 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.046 Malloc1 00:22:51.046 [2024-07-15 11:33:34.459022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.046 Malloc2 00:22:51.046 Malloc3 00:22:51.046 Malloc4 00:22:51.046 Malloc5 00:22:51.305 Malloc6 00:22:51.305 Malloc7 00:22:51.305 Malloc8 00:22:51.305 Malloc9 00:22:51.305 Malloc10 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=667089 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 667089 
/var/tmp/bdevperf.sock 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 667089 ']' 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.305 { 00:22:51.305 "params": { 00:22:51.305 "name": "Nvme$subsystem", 00:22:51.305 "trtype": "$TEST_TRANSPORT", 00:22:51.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.305 "adrfam": "ipv4", 00:22:51.305 "trsvcid": "$NVMF_PORT", 00:22:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.305 "hdgst": ${hdgst:-false}, 00:22:51.305 "ddgst": ${ddgst:-false} 00:22:51.305 }, 00:22:51.305 "method": "bdev_nvme_attach_controller" 00:22:51.305 } 00:22:51.305 EOF 00:22:51.305 )") 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.305 { 00:22:51.305 "params": { 00:22:51.305 "name": "Nvme$subsystem", 00:22:51.305 "trtype": "$TEST_TRANSPORT", 00:22:51.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.305 "adrfam": "ipv4", 00:22:51.305 "trsvcid": "$NVMF_PORT", 00:22:51.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.305 "hdgst": ${hdgst:-false}, 00:22:51.305 "ddgst": ${ddgst:-false} 00:22:51.305 }, 00:22:51.305 "method": "bdev_nvme_attach_controller" 00:22:51.305 } 00:22:51.305 EOF 00:22:51.305 )") 00:22:51.305 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.564 { 00:22:51.564 "params": { 00:22:51.564 
"name": "Nvme$subsystem", 00:22:51.564 "trtype": "$TEST_TRANSPORT", 00:22:51.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "$NVMF_PORT", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.564 "hdgst": ${hdgst:-false}, 00:22:51.564 "ddgst": ${ddgst:-false} 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 } 00:22:51.564 EOF 00:22:51.564 )") 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.564 { 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme$subsystem", 00:22:51.564 "trtype": "$TEST_TRANSPORT", 00:22:51.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "$NVMF_PORT", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.564 "hdgst": ${hdgst:-false}, 00:22:51.564 "ddgst": ${ddgst:-false} 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 } 00:22:51.564 EOF 00:22:51.564 )") 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.564 { 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme$subsystem", 00:22:51.564 "trtype": "$TEST_TRANSPORT", 00:22:51.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "$NVMF_PORT", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.564 "hdgst": ${hdgst:-false}, 00:22:51.564 "ddgst": ${ddgst:-false} 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 } 00:22:51.564 EOF 00:22:51.564 )") 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.564 { 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme$subsystem", 00:22:51.564 "trtype": "$TEST_TRANSPORT", 00:22:51.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "$NVMF_PORT", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.564 "hdgst": ${hdgst:-false}, 00:22:51.564 "ddgst": ${ddgst:-false} 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 } 00:22:51.564 EOF 00:22:51.564 )") 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.564 { 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme$subsystem", 
00:22:51.564 "trtype": "$TEST_TRANSPORT", 00:22:51.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "$NVMF_PORT", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.564 "hdgst": ${hdgst:-false}, 00:22:51.564 "ddgst": ${ddgst:-false} 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 } 00:22:51.564 EOF 00:22:51.564 )") 00:22:51.564 [2024-07-15 11:33:34.923689] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:22:51.564 [2024-07-15 11:33:34.923738] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.564 { 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme$subsystem", 00:22:51.564 "trtype": "$TEST_TRANSPORT", 00:22:51.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "$NVMF_PORT", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.564 "hdgst": ${hdgst:-false}, 00:22:51.564 "ddgst": ${ddgst:-false} 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 } 00:22:51.564 EOF 00:22:51.564 )") 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.564 { 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme$subsystem", 00:22:51.564 "trtype": "$TEST_TRANSPORT", 00:22:51.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "$NVMF_PORT", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.564 "hdgst": ${hdgst:-false}, 00:22:51.564 "ddgst": ${ddgst:-false} 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 } 00:22:51.564 EOF 00:22:51.564 )") 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.564 { 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme$subsystem", 00:22:51.564 "trtype": "$TEST_TRANSPORT", 00:22:51.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "$NVMF_PORT", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.564 "hdgst": ${hdgst:-false}, 00:22:51.564 "ddgst": ${ddgst:-false} 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 } 00:22:51.564 EOF 00:22:51.564 )") 00:22:51.564 11:33:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.564 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:51.564 11:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme1", 00:22:51.564 "trtype": "tcp", 00:22:51.564 "traddr": "10.0.0.2", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "4420", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.564 "hdgst": false, 00:22:51.564 "ddgst": false 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 },{ 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme2", 00:22:51.564 "trtype": "tcp", 00:22:51.564 "traddr": "10.0.0.2", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "4420", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.564 "hdgst": false, 00:22:51.564 "ddgst": false 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 },{ 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme3", 00:22:51.564 "trtype": "tcp", 00:22:51.564 "traddr": "10.0.0.2", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "4420", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:51.564 "hdgst": false, 00:22:51.564 "ddgst": false 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 },{ 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme4", 00:22:51.564 "trtype": "tcp", 00:22:51.564 "traddr": "10.0.0.2", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "4420", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:51.564 "hdgst": false, 00:22:51.564 "ddgst": false 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 },{ 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme5", 00:22:51.564 "trtype": "tcp", 00:22:51.564 "traddr": "10.0.0.2", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "4420", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:51.564 "hdgst": false, 00:22:51.564 "ddgst": false 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 },{ 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme6", 00:22:51.564 "trtype": "tcp", 00:22:51.564 "traddr": "10.0.0.2", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "4420", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:51.564 "hdgst": false, 00:22:51.564 "ddgst": false 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 },{ 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme7", 00:22:51.564 "trtype": "tcp", 00:22:51.564 "traddr": "10.0.0.2", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "4420", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:51.564 "hdgst": false, 00:22:51.564 "ddgst": false 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 },{ 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme8", 00:22:51.564 "trtype": "tcp", 00:22:51.564 
"traddr": "10.0.0.2", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "4420", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:51.564 "hdgst": false, 00:22:51.564 "ddgst": false 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 },{ 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme9", 00:22:51.564 "trtype": "tcp", 00:22:51.564 "traddr": "10.0.0.2", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "4420", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:51.564 "hdgst": false, 00:22:51.564 "ddgst": false 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 },{ 00:22:51.564 "params": { 00:22:51.564 "name": "Nvme10", 00:22:51.564 "trtype": "tcp", 00:22:51.564 "traddr": "10.0.0.2", 00:22:51.564 "adrfam": "ipv4", 00:22:51.564 "trsvcid": "4420", 00:22:51.564 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:51.564 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:51.564 "hdgst": false, 00:22:51.564 "ddgst": false 00:22:51.564 }, 00:22:51.564 "method": "bdev_nvme_attach_controller" 00:22:51.564 }' 00:22:51.564 [2024-07-15 11:33:34.993043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.564 [2024-07-15 11:33:35.067005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.940 11:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.940 11:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:52.940 11:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:52.940 11:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.940 11:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:52.940 11:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.940 11:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 667089 00:22:52.940 11:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:52.940 11:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:53.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 667089 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 666815 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.875 { 00:22:53.875 "params": { 00:22:53.875 "name": "Nvme$subsystem", 00:22:53.875 "trtype": "$TEST_TRANSPORT", 00:22:53.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.875 "adrfam": "ipv4", 00:22:53.875 "trsvcid": "$NVMF_PORT", 00:22:53.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.875 "hdgst": ${hdgst:-false}, 00:22:53.875 "ddgst": ${ddgst:-false} 00:22:53.875 }, 00:22:53.875 "method": "bdev_nvme_attach_controller" 00:22:53.875 } 00:22:53.875 EOF 00:22:53.875 )") 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.875 { 00:22:53.875 "params": { 00:22:53.875 "name": "Nvme$subsystem", 00:22:53.875 "trtype": "$TEST_TRANSPORT", 00:22:53.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.875 "adrfam": "ipv4", 00:22:53.875 "trsvcid": "$NVMF_PORT", 00:22:53.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.875 "hdgst": ${hdgst:-false}, 00:22:53.875 "ddgst": ${ddgst:-false} 00:22:53.875 }, 00:22:53.875 "method": "bdev_nvme_attach_controller" 00:22:53.875 } 00:22:53.875 EOF 00:22:53.875 )") 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.875 { 00:22:53.875 "params": { 00:22:53.875 "name": "Nvme$subsystem", 00:22:53.875 "trtype": "$TEST_TRANSPORT", 00:22:53.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.875 "adrfam": "ipv4", 00:22:53.875 "trsvcid": "$NVMF_PORT", 00:22:53.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.875 "hdgst": ${hdgst:-false}, 00:22:53.875 "ddgst": ${ddgst:-false} 00:22:53.875 }, 00:22:53.875 "method": "bdev_nvme_attach_controller" 00:22:53.875 } 00:22:53.875 EOF 00:22:53.875 )") 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.875 { 00:22:53.875 "params": { 00:22:53.875 "name": "Nvme$subsystem", 00:22:53.875 "trtype": "$TEST_TRANSPORT", 00:22:53.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.875 "adrfam": "ipv4", 00:22:53.875 "trsvcid": "$NVMF_PORT", 00:22:53.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.875 "hdgst": ${hdgst:-false}, 00:22:53.875 "ddgst": ${ddgst:-false} 00:22:53.875 }, 00:22:53.875 "method": "bdev_nvme_attach_controller" 00:22:53.875 } 00:22:53.875 EOF 00:22:53.875 )") 00:22:53.875 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.134 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.134 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:22:54.134 { 00:22:54.134 "params": { 00:22:54.134 "name": "Nvme$subsystem", 00:22:54.134 "trtype": "$TEST_TRANSPORT", 00:22:54.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.134 "adrfam": "ipv4", 00:22:54.134 "trsvcid": "$NVMF_PORT", 00:22:54.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.134 "hdgst": ${hdgst:-false}, 00:22:54.134 "ddgst": ${ddgst:-false} 00:22:54.134 }, 00:22:54.134 "method": "bdev_nvme_attach_controller" 00:22:54.134 } 00:22:54.134 EOF 00:22:54.134 )") 00:22:54.134 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.134 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.134 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.134 { 00:22:54.134 "params": { 00:22:54.134 "name": "Nvme$subsystem", 00:22:54.134 "trtype": "$TEST_TRANSPORT", 00:22:54.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.134 "adrfam": "ipv4", 00:22:54.134 "trsvcid": "$NVMF_PORT", 00:22:54.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.134 "hdgst": ${hdgst:-false}, 00:22:54.134 "ddgst": ${ddgst:-false} 00:22:54.134 }, 00:22:54.134 "method": "bdev_nvme_attach_controller" 00:22:54.134 } 00:22:54.134 EOF 00:22:54.134 )") 00:22:54.134 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.134 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.134 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.134 { 00:22:54.134 "params": { 00:22:54.134 "name": "Nvme$subsystem", 00:22:54.134 "trtype": "$TEST_TRANSPORT", 00:22:54.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.134 "adrfam": "ipv4", 00:22:54.134 "trsvcid": "$NVMF_PORT", 00:22:54.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.134 "hdgst": ${hdgst:-false}, 00:22:54.135 "ddgst": ${ddgst:-false} 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 } 00:22:54.135 EOF 00:22:54.135 )") 00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.135 [2024-07-15 11:33:37.484720] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:22:54.135 [2024-07-15 11:33:37.484770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667568 ] 00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.135 { 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme$subsystem", 00:22:54.135 "trtype": "$TEST_TRANSPORT", 00:22:54.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "$NVMF_PORT", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.135 "hdgst": ${hdgst:-false}, 00:22:54.135 "ddgst": ${ddgst:-false} 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 } 00:22:54.135 EOF 00:22:54.135 )") 00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.135 { 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme$subsystem", 00:22:54.135 "trtype": "$TEST_TRANSPORT", 00:22:54.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "$NVMF_PORT", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.135 "hdgst": ${hdgst:-false}, 00:22:54.135 "ddgst": ${ddgst:-false} 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 } 00:22:54.135 EOF 00:22:54.135 )") 00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.135 { 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme$subsystem", 00:22:54.135 "trtype": "$TEST_TRANSPORT", 00:22:54.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "$NVMF_PORT", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.135 "hdgst": ${hdgst:-false}, 00:22:54.135 "ddgst": ${ddgst:-false} 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 } 00:22:54.135 EOF 00:22:54.135 )") 00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
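Once rendered, the config is handed to bdevperf through a process substitution, which is what target/shutdown.sh@91 does for the run traced here (--json /dev/fd/62). A hand-run equivalent, with the paths and parameters exactly as traced, would look roughly like:

# Hand-run equivalent of the traced bdevperf invocation (target/shutdown.sh@91).
# rootdir points at the SPDK checkout used in this workspace.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
num_subsystems=({1..10})

"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1
# -q 64: 64 outstanding I/Os per bdev, -o 65536: 64 KiB I/O size,
# -w verify: write/read-back verification, -t 1: one-second run,
# matching the per-device results table printed further down.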
00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:54.135 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.135 11:33:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme1", 00:22:54.135 "trtype": "tcp", 00:22:54.135 "traddr": "10.0.0.2", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "4420", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.135 "hdgst": false, 00:22:54.135 "ddgst": false 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 },{ 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme2", 00:22:54.135 "trtype": "tcp", 00:22:54.135 "traddr": "10.0.0.2", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "4420", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:54.135 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:54.135 "hdgst": false, 00:22:54.135 "ddgst": false 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 },{ 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme3", 00:22:54.135 "trtype": "tcp", 00:22:54.135 "traddr": "10.0.0.2", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "4420", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:54.135 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:54.135 "hdgst": false, 00:22:54.135 "ddgst": false 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 },{ 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme4", 00:22:54.135 "trtype": "tcp", 00:22:54.135 "traddr": "10.0.0.2", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "4420", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:54.135 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:54.135 "hdgst": false, 00:22:54.135 "ddgst": false 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 },{ 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme5", 00:22:54.135 "trtype": "tcp", 00:22:54.135 "traddr": "10.0.0.2", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "4420", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:54.135 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:54.135 "hdgst": false, 00:22:54.135 "ddgst": false 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 },{ 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme6", 00:22:54.135 "trtype": "tcp", 00:22:54.135 "traddr": "10.0.0.2", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "4420", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:54.135 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:54.135 "hdgst": false, 00:22:54.135 "ddgst": false 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 },{ 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme7", 00:22:54.135 "trtype": "tcp", 00:22:54.135 "traddr": "10.0.0.2", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "4420", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:54.135 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:54.135 "hdgst": false, 00:22:54.135 "ddgst": false 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 },{ 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme8", 00:22:54.135 "trtype": "tcp", 00:22:54.135 "traddr": "10.0.0.2", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "4420", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:54.135 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:22:54.135 "hdgst": false, 00:22:54.135 "ddgst": false 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 },{ 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme9", 00:22:54.135 "trtype": "tcp", 00:22:54.135 "traddr": "10.0.0.2", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "4420", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:54.135 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:54.135 "hdgst": false, 00:22:54.135 "ddgst": false 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 },{ 00:22:54.135 "params": { 00:22:54.135 "name": "Nvme10", 00:22:54.135 "trtype": "tcp", 00:22:54.135 "traddr": "10.0.0.2", 00:22:54.135 "adrfam": "ipv4", 00:22:54.135 "trsvcid": "4420", 00:22:54.135 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:54.135 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:54.135 "hdgst": false, 00:22:54.135 "ddgst": false 00:22:54.135 }, 00:22:54.135 "method": "bdev_nvme_attach_controller" 00:22:54.135 }' 00:22:54.135 [2024-07-15 11:33:37.552882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.135 [2024-07-15 11:33:37.626987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.036 Running I/O for 1 seconds... 00:22:56.972 00:22:56.972 Latency(us) 00:22:56.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.972 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.972 Verification LBA range: start 0x0 length 0x400 00:22:56.972 Nvme1n1 : 1.03 254.61 15.91 0.00 0.00 243532.80 6724.56 217921.45 00:22:56.972 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.972 Verification LBA range: start 0x0 length 0x400 00:22:56.972 Nvme2n1 : 1.06 241.75 15.11 0.00 0.00 257767.74 18236.10 225215.89 00:22:56.972 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.973 Verification LBA range: start 0x0 length 0x400 00:22:56.973 Nvme3n1 : 1.07 305.03 19.06 0.00 0.00 200623.93 6952.51 209715.20 00:22:56.973 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.973 Verification LBA range: start 0x0 length 0x400 00:22:56.973 Nvme4n1 : 1.11 288.07 18.00 0.00 0.00 210725.49 17780.20 216097.84 00:22:56.973 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.973 Verification LBA range: start 0x0 length 0x400 00:22:56.973 Nvme5n1 : 1.13 287.01 17.94 0.00 0.00 208161.62 2208.28 216097.84 00:22:56.973 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.973 Verification LBA range: start 0x0 length 0x400 00:22:56.973 Nvme6n1 : 1.13 287.27 17.95 0.00 0.00 204817.98 2393.49 217009.64 00:22:56.973 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.973 Verification LBA range: start 0x0 length 0x400 00:22:56.973 Nvme7n1 : 1.12 286.95 17.93 0.00 0.00 202015.97 15500.69 217009.64 00:22:56.973 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.973 Verification LBA range: start 0x0 length 0x400 00:22:56.973 Nvme8n1 : 1.12 289.09 18.07 0.00 0.00 197307.38 1467.44 197861.73 00:22:56.973 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.973 Verification LBA range: start 0x0 length 0x400 00:22:56.973 Nvme9n1 : 1.13 282.49 17.66 0.00 0.00 199312.96 16640.45 227039.50 00:22:56.973 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.973 
Verification LBA range: start 0x0 length 0x400 00:22:56.973 Nvme10n1 : 1.17 328.75 20.55 0.00 0.00 169322.22 5898.24 248011.02 00:22:56.973 =================================================================================================================== 00:22:56.973 Total : 2851.04 178.19 0.00 0.00 206903.15 1467.44 248011.02 00:22:56.973 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:56.973 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:56.973 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:56.973 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:56.973 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:56.973 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.973 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:56.973 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.973 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:56.973 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.973 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.973 rmmod nvme_tcp 00:22:56.973 rmmod nvme_fabrics 00:22:57.231 rmmod nvme_keyring 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 666815 ']' 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 666815 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 666815 ']' 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 666815 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 666815 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 666815' 00:22:57.231 killing process with pid 666815 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 666815 00:22:57.231 11:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 666815 00:22:57.490 11:33:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:57.490 11:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:57.490 11:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:57.490 11:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.490 11:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:57.490 11:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.490 11:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.490 11:33:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:00.027 00:23:00.027 real 0m15.432s 00:23:00.027 user 0m35.040s 00:23:00.027 sys 0m5.695s 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.027 ************************************ 00:23:00.027 END TEST nvmf_shutdown_tc1 00:23:00.027 ************************************ 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.027 ************************************ 00:23:00.027 START TEST nvmf_shutdown_tc2 00:23:00.027 ************************************ 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:00.027 11:33:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.027 11:33:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:00.027 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:00.027 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.027 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:23:00.028 Found net devices under 0000:86:00.0: cvl_0_0 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:00.028 Found net devices under 0000:86:00.1: cvl_0_1 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:23:00.028 00:23:00.028 --- 10.0.0.2 ping statistics --- 00:23:00.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.028 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:23:00.028 00:23:00.028 --- 10.0.0.1 ping statistics --- 00:23:00.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.028 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=668642 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 668642 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 
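The nvmf_tcp_init sequence above builds the standard two-port test topology for these TCP runs: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and carries the target address 10.0.0.2, while the other port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1; an iptables rule opens TCP port 4420 for the NVMe/TCP listener, and a ping in each direction confirms connectivity before the target is launched. A minimal hand-run sketch of the same setup (assuming the same interface names and that the two ports can reach each other) would look roughly like:

    TARGET_IF=cvl_0_0          # port moved into the namespace (target side)
    INITIATOR_IF=cvl_0_1       # port left in the default namespace (initiator side)
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic to the listener port
    ping -c 1 10.0.0.2                                                     # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                                 # target -> initiator

Launching the target then follows the same pattern as the log record above: ip netns exec "$NS" .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E.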
00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 668642 ']' 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.028 11:33:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.028 [2024-07-15 11:33:43.532499] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:23:00.028 [2024-07-15 11:33:43.532543] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.028 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.028 [2024-07-15 11:33:43.604202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.287 [2024-07-15 11:33:43.679243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.287 [2024-07-15 11:33:43.679305] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.287 [2024-07-15 11:33:43.679312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.287 [2024-07-15 11:33:43.679318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.287 [2024-07-15 11:33:43.679322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
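A note on the core masks used here: nvmf_tgt is started with -m 0x1E, and 0x1E is binary 11110, i.e. cores 1-4, which is why the four reactor notices that follow report cores 1 through 4. Bit 0 (core 0) is deliberately left clear, because the bdevperf initiator started later in this test is pinned with -c 0x1 and so gets core 0 to itself; target and initiator therefore never contend for the same CPU. A quick way to decode such a mask from a shell:

    echo 'obase=2; ibase=16; 1E' | bc    # prints 11110 -> bits for cores 1,2,3,4 set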
00:23:00.287 [2024-07-15 11:33:43.679436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.287 [2024-07-15 11:33:43.679542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.287 [2024-07-15 11:33:43.679647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.287 [2024-07-15 11:33:43.679649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.855 [2024-07-15 11:33:44.377035] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.855 11:33:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.855 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.856 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.856 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.856 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.856 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.856 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.856 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.856 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.856 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.856 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:00.856 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.856 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.114 Malloc1 00:23:01.114 [2024-07-15 11:33:44.468795] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.114 Malloc2 00:23:01.114 Malloc3 00:23:01.114 Malloc4 00:23:01.114 Malloc5 00:23:01.114 Malloc6 00:23:01.114 Malloc7 00:23:01.374 Malloc8 00:23:01.374 Malloc9 00:23:01.374 Malloc10 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=668924 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 668924 /var/tmp/bdevperf.sock 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 668924 ']' 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
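The create_subsystems block above appends one group of RPC commands per subsystem to rpcs.txt (the repeated cat calls) and replays the file against the running target, which is what produces the ten Malloc-backed subsystems and the listener on 10.0.0.2 port 4420 reported in the output. The generated rpcs.txt itself is not echoed into the log; as a rough, hand-written equivalent (sketch only -- the malloc size and serial numbers below are illustrative, not taken from this run), the same result can be obtained with SPDK's rpc.py:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in $(seq 1 10); do
        $rpc bdev_malloc_create 64 512 -b Malloc$i                           # backing malloc bdev
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i  # subsystem, any host allowed
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i      # expose the bdev as a namespace
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

These are the same nqn.2016-06.io.spdk:cnodeN subsystems that the bdevperf configuration generated just below attaches to, one controller per subsystem.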
00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.374 { 00:23:01.374 "params": { 00:23:01.374 "name": "Nvme$subsystem", 00:23:01.374 "trtype": "$TEST_TRANSPORT", 00:23:01.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.374 "adrfam": "ipv4", 00:23:01.374 "trsvcid": "$NVMF_PORT", 00:23:01.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.374 "hdgst": ${hdgst:-false}, 00:23:01.374 "ddgst": ${ddgst:-false} 00:23:01.374 }, 00:23:01.374 "method": "bdev_nvme_attach_controller" 00:23:01.374 } 00:23:01.374 EOF 00:23:01.374 )") 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.374 { 00:23:01.374 "params": { 00:23:01.374 "name": "Nvme$subsystem", 00:23:01.374 "trtype": "$TEST_TRANSPORT", 00:23:01.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.374 "adrfam": "ipv4", 00:23:01.374 "trsvcid": "$NVMF_PORT", 00:23:01.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.374 "hdgst": ${hdgst:-false}, 00:23:01.374 "ddgst": ${ddgst:-false} 00:23:01.374 }, 00:23:01.374 "method": "bdev_nvme_attach_controller" 00:23:01.374 } 00:23:01.374 EOF 00:23:01.374 )") 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.374 { 00:23:01.374 "params": { 00:23:01.374 "name": "Nvme$subsystem", 00:23:01.374 "trtype": "$TEST_TRANSPORT", 00:23:01.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.374 "adrfam": "ipv4", 00:23:01.374 "trsvcid": "$NVMF_PORT", 00:23:01.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.374 "hdgst": ${hdgst:-false}, 00:23:01.374 "ddgst": ${ddgst:-false} 00:23:01.374 }, 00:23:01.374 "method": "bdev_nvme_attach_controller" 00:23:01.374 } 00:23:01.374 EOF 00:23:01.374 )") 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.374 { 00:23:01.374 "params": { 00:23:01.374 "name": "Nvme$subsystem", 00:23:01.374 "trtype": "$TEST_TRANSPORT", 00:23:01.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.374 "adrfam": "ipv4", 00:23:01.374 "trsvcid": "$NVMF_PORT", 
00:23:01.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.374 "hdgst": ${hdgst:-false}, 00:23:01.374 "ddgst": ${ddgst:-false} 00:23:01.374 }, 00:23:01.374 "method": "bdev_nvme_attach_controller" 00:23:01.374 } 00:23:01.374 EOF 00:23:01.374 )") 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.374 { 00:23:01.374 "params": { 00:23:01.374 "name": "Nvme$subsystem", 00:23:01.374 "trtype": "$TEST_TRANSPORT", 00:23:01.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.374 "adrfam": "ipv4", 00:23:01.374 "trsvcid": "$NVMF_PORT", 00:23:01.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.374 "hdgst": ${hdgst:-false}, 00:23:01.374 "ddgst": ${ddgst:-false} 00:23:01.374 }, 00:23:01.374 "method": "bdev_nvme_attach_controller" 00:23:01.374 } 00:23:01.374 EOF 00:23:01.374 )") 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.374 { 00:23:01.374 "params": { 00:23:01.374 "name": "Nvme$subsystem", 00:23:01.374 "trtype": "$TEST_TRANSPORT", 00:23:01.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.374 "adrfam": "ipv4", 00:23:01.374 "trsvcid": "$NVMF_PORT", 00:23:01.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.374 "hdgst": ${hdgst:-false}, 00:23:01.374 "ddgst": ${ddgst:-false} 00:23:01.374 }, 00:23:01.374 "method": "bdev_nvme_attach_controller" 00:23:01.374 } 00:23:01.374 EOF 00:23:01.374 )") 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.374 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.374 { 00:23:01.374 "params": { 00:23:01.374 "name": "Nvme$subsystem", 00:23:01.374 "trtype": "$TEST_TRANSPORT", 00:23:01.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.374 "adrfam": "ipv4", 00:23:01.374 "trsvcid": "$NVMF_PORT", 00:23:01.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.374 "hdgst": ${hdgst:-false}, 00:23:01.374 "ddgst": ${ddgst:-false} 00:23:01.374 }, 00:23:01.374 "method": "bdev_nvme_attach_controller" 00:23:01.374 } 00:23:01.374 EOF 00:23:01.375 )") 00:23:01.375 [2024-07-15 11:33:44.938078] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:23:01.375 [2024-07-15 11:33:44.938128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668924 ] 00:23:01.375 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.375 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.375 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.375 { 00:23:01.375 "params": { 00:23:01.375 "name": "Nvme$subsystem", 00:23:01.375 "trtype": "$TEST_TRANSPORT", 00:23:01.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.375 "adrfam": "ipv4", 00:23:01.375 "trsvcid": "$NVMF_PORT", 00:23:01.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.375 "hdgst": ${hdgst:-false}, 00:23:01.375 "ddgst": ${ddgst:-false} 00:23:01.375 }, 00:23:01.375 "method": "bdev_nvme_attach_controller" 00:23:01.375 } 00:23:01.375 EOF 00:23:01.375 )") 00:23:01.375 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.375 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.375 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.375 { 00:23:01.375 "params": { 00:23:01.375 "name": "Nvme$subsystem", 00:23:01.375 "trtype": "$TEST_TRANSPORT", 00:23:01.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.375 "adrfam": "ipv4", 00:23:01.375 "trsvcid": "$NVMF_PORT", 00:23:01.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.375 "hdgst": ${hdgst:-false}, 00:23:01.375 "ddgst": ${ddgst:-false} 00:23:01.375 }, 00:23:01.375 "method": "bdev_nvme_attach_controller" 00:23:01.375 } 00:23:01.375 EOF 00:23:01.375 )") 00:23:01.375 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.375 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.375 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.375 { 00:23:01.375 "params": { 00:23:01.375 "name": "Nvme$subsystem", 00:23:01.375 "trtype": "$TEST_TRANSPORT", 00:23:01.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.375 "adrfam": "ipv4", 00:23:01.375 "trsvcid": "$NVMF_PORT", 00:23:01.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.375 "hdgst": ${hdgst:-false}, 00:23:01.375 "ddgst": ${ddgst:-false} 00:23:01.375 }, 00:23:01.375 "method": "bdev_nvme_attach_controller" 00:23:01.375 } 00:23:01.375 EOF 00:23:01.375 )") 00:23:01.375 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.375 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:23:01.375 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.634 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:01.634 11:33:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:01.634 "params": { 00:23:01.634 "name": "Nvme1", 00:23:01.634 "trtype": "tcp", 00:23:01.634 "traddr": "10.0.0.2", 00:23:01.634 "adrfam": "ipv4", 00:23:01.634 "trsvcid": "4420", 00:23:01.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.634 "hdgst": false, 00:23:01.634 "ddgst": false 00:23:01.634 }, 00:23:01.634 "method": "bdev_nvme_attach_controller" 00:23:01.634 },{ 00:23:01.634 "params": { 00:23:01.634 "name": "Nvme2", 00:23:01.634 "trtype": "tcp", 00:23:01.634 "traddr": "10.0.0.2", 00:23:01.634 "adrfam": "ipv4", 00:23:01.634 "trsvcid": "4420", 00:23:01.634 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.634 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.634 "hdgst": false, 00:23:01.634 "ddgst": false 00:23:01.634 }, 00:23:01.634 "method": "bdev_nvme_attach_controller" 00:23:01.634 },{ 00:23:01.634 "params": { 00:23:01.634 "name": "Nvme3", 00:23:01.634 "trtype": "tcp", 00:23:01.634 "traddr": "10.0.0.2", 00:23:01.634 "adrfam": "ipv4", 00:23:01.634 "trsvcid": "4420", 00:23:01.634 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.634 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.634 "hdgst": false, 00:23:01.634 "ddgst": false 00:23:01.634 }, 00:23:01.634 "method": "bdev_nvme_attach_controller" 00:23:01.634 },{ 00:23:01.634 "params": { 00:23:01.634 "name": "Nvme4", 00:23:01.634 "trtype": "tcp", 00:23:01.634 "traddr": "10.0.0.2", 00:23:01.634 "adrfam": "ipv4", 00:23:01.634 "trsvcid": "4420", 00:23:01.634 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.634 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.634 "hdgst": false, 00:23:01.634 "ddgst": false 00:23:01.634 }, 00:23:01.634 "method": "bdev_nvme_attach_controller" 00:23:01.634 },{ 00:23:01.634 "params": { 00:23:01.634 "name": "Nvme5", 00:23:01.634 "trtype": "tcp", 00:23:01.634 "traddr": "10.0.0.2", 00:23:01.634 "adrfam": "ipv4", 00:23:01.634 "trsvcid": "4420", 00:23:01.634 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.634 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.634 "hdgst": false, 00:23:01.634 "ddgst": false 00:23:01.634 }, 00:23:01.634 "method": "bdev_nvme_attach_controller" 00:23:01.634 },{ 00:23:01.634 "params": { 00:23:01.634 "name": "Nvme6", 00:23:01.634 "trtype": "tcp", 00:23:01.634 "traddr": "10.0.0.2", 00:23:01.634 "adrfam": "ipv4", 00:23:01.634 "trsvcid": "4420", 00:23:01.634 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.634 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.634 "hdgst": false, 00:23:01.634 "ddgst": false 00:23:01.634 }, 00:23:01.634 "method": "bdev_nvme_attach_controller" 00:23:01.634 },{ 00:23:01.634 "params": { 00:23:01.634 "name": "Nvme7", 00:23:01.634 "trtype": "tcp", 00:23:01.634 "traddr": "10.0.0.2", 00:23:01.634 "adrfam": "ipv4", 00:23:01.634 "trsvcid": "4420", 00:23:01.634 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.634 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.634 "hdgst": false, 00:23:01.634 "ddgst": false 00:23:01.634 }, 00:23:01.634 "method": "bdev_nvme_attach_controller" 00:23:01.634 },{ 00:23:01.634 "params": { 00:23:01.634 "name": "Nvme8", 00:23:01.634 "trtype": "tcp", 00:23:01.634 "traddr": "10.0.0.2", 00:23:01.634 "adrfam": "ipv4", 00:23:01.634 "trsvcid": "4420", 00:23:01.634 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.634 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:23:01.634 "hdgst": false, 00:23:01.634 "ddgst": false 00:23:01.634 }, 00:23:01.634 "method": "bdev_nvme_attach_controller" 00:23:01.634 },{ 00:23:01.634 "params": { 00:23:01.634 "name": "Nvme9", 00:23:01.634 "trtype": "tcp", 00:23:01.634 "traddr": "10.0.0.2", 00:23:01.634 "adrfam": "ipv4", 00:23:01.634 "trsvcid": "4420", 00:23:01.634 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.634 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:01.634 "hdgst": false, 00:23:01.634 "ddgst": false 00:23:01.634 }, 00:23:01.634 "method": "bdev_nvme_attach_controller" 00:23:01.634 },{ 00:23:01.634 "params": { 00:23:01.634 "name": "Nvme10", 00:23:01.634 "trtype": "tcp", 00:23:01.634 "traddr": "10.0.0.2", 00:23:01.634 "adrfam": "ipv4", 00:23:01.634 "trsvcid": "4420", 00:23:01.635 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.635 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.635 "hdgst": false, 00:23:01.635 "ddgst": false 00:23:01.635 }, 00:23:01.635 "method": "bdev_nvme_attach_controller" 00:23:01.635 }' 00:23:01.635 [2024-07-15 11:33:45.008052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.635 [2024-07-15 11:33:45.083503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.537 Running I/O for 10 seconds... 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:03.537 11:33:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:03.537 11:33:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:03.795 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:03.795 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:03.795 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.795 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.795 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.795 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.795 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.795 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=72 00:23:03.795 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 72 -ge 100 ']' 00:23:03.795 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=200 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 200 -ge 100 ']' 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 668924 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 668924 ']' 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 668924 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 668924 00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:04.055 11:33:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 668924'
00:23:04.055 killing process with pid 668924
00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 668924
00:23:04.055 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 668924
00:23:04.315 Received shutdown signal, test time was about 0.914069 seconds
00:23:04.315
00:23:04.315 Latency(us)
00:23:04.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:04.315 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.315 Verification LBA range: start 0x0 length 0x400
00:23:04.315 Nvme1n1 : 0.90 290.10 18.13 0.00 0.00 217278.91 4217.10 204244.37
00:23:04.315 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.315 Verification LBA range: start 0x0 length 0x400
00:23:04.315 Nvme2n1 : 0.91 280.28 17.52 0.00 0.00 221924.62 17096.35 240716.58
00:23:04.315 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.315 Verification LBA range: start 0x0 length 0x400
00:23:04.315 Nvme3n1 : 0.89 286.71 17.92 0.00 0.00 212579.84 14531.90 242540.19
00:23:04.315 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.315 Verification LBA range: start 0x0 length 0x400
00:23:04.315 Nvme4n1 : 0.90 283.61 17.73 0.00 0.00 211229.83 18350.08 221568.67
00:23:04.315 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.315 Verification LBA range: start 0x0 length 0x400
00:23:04.315 Nvme5n1 : 0.88 218.41 13.65 0.00 0.00 268798.52 33736.79 235245.75
00:23:04.315 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.315 Verification LBA range: start 0x0 length 0x400
00:23:04.315 Nvme6n1 : 0.91 287.97 18.00 0.00 0.00 199720.30 1823.61 237069.36
00:23:04.315 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.315 Verification LBA range: start 0x0 length 0x400
00:23:04.315 Nvme7n1 : 0.91 281.86 17.62 0.00 0.00 200894.78 16412.49 269894.34
00:23:04.315 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.315 Verification LBA range: start 0x0 length 0x400
00:23:04.315 Nvme8n1 : 0.88 226.30 14.14 0.00 0.00 241615.92 6781.55 238892.97
00:23:04.315 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.315 Verification LBA range: start 0x0 length 0x400
00:23:04.315 Nvme9n1 : 0.89 215.61 13.48 0.00 0.00 251626.48 23706.94 238892.97
00:23:04.315 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:04.315 Verification LBA range: start 0x0 length 0x400
00:23:04.315 Nvme10n1 : 0.90 219.73 13.73 0.00 0.00 239533.00 8605.16 266247.12
00:23:04.315 ===================================================================================================================
00:23:04.315 Total : 2590.59 161.91 0.00 0.00 223896.27 1823.61 269894.34
00:23:04.315 11:33:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 668642
00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:23:05.690 11:33:48
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:05.690 rmmod nvme_tcp 00:23:05.690 rmmod nvme_fabrics 00:23:05.690 rmmod nvme_keyring 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 668642 ']' 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 668642 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 668642 ']' 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 668642 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 668642 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 668642' 00:23:05.690 killing process with pid 668642 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 668642 00:23:05.690 11:33:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 668642 00:23:05.951 11:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:05.951 11:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:05.951 11:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:05.951 11:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:05.951 
11:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:05.951 11:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.951 11:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.951 11:33:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:07.939 00:23:07.939 real 0m8.264s 00:23:07.939 user 0m25.534s 00:23:07.939 sys 0m1.333s 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.939 ************************************ 00:23:07.939 END TEST nvmf_shutdown_tc2 00:23:07.939 ************************************ 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:07.939 ************************************ 00:23:07.939 START TEST nvmf_shutdown_tc3 00:23:07.939 ************************************ 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.939 
11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:07.939 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:07.940 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:07.940 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:07.940 Found net devices under 0000:86:00.0: cvl_0_0 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.940 11:33:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:07.940 Found net devices under 0000:86:00.1: cvl_0_1 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.940 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.200 11:33:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:08.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:23:08.200 00:23:08.200 --- 10.0.0.2 ping statistics --- 00:23:08.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.200 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:08.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:23:08.200 00:23:08.200 --- 10.0.0.1 ping statistics --- 00:23:08.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.200 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:08.200 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:08.459 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:08.459 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.459 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.459 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.459 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=670144 00:23:08.459 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 670144 00:23:08.459 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:08.459 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 670144 ']' 00:23:08.459 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.459 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.459 11:33:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.459 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.459 11:33:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.459 [2024-07-15 11:33:51.874845] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:23:08.459 [2024-07-15 11:33:51.874899] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.459 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.459 [2024-07-15 11:33:51.945639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.459 [2024-07-15 11:33:52.023945] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.459 [2024-07-15 11:33:52.023985] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.459 [2024-07-15 11:33:52.023991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.459 [2024-07-15 11:33:52.023997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.459 [2024-07-15 11:33:52.024002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.459 [2024-07-15 11:33:52.024113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.459 [2024-07-15 11:33:52.024271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.459 [2024-07-15 11:33:52.024333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.459 [2024-07-15 11:33:52.024333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.395 [2024-07-15 11:33:52.721247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.395 11:33:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.395 Malloc1 00:23:09.395 [2024-07-15 11:33:52.817292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.395 Malloc2 00:23:09.395 Malloc3 00:23:09.395 Malloc4 00:23:09.395 Malloc5 00:23:09.654 Malloc6 00:23:09.654 Malloc7 00:23:09.654 Malloc8 00:23:09.654 Malloc9 00:23:09.654 Malloc10 00:23:09.654 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.654 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:09.654 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.654 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.654 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=670417 00:23:09.654 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 670417 /var/tmp/bdevperf.sock 00:23:09.654 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 670417 ']' 00:23:09.654 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.654 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:09.654 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.654 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:09.654 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.914 { 00:23:09.914 "params": { 00:23:09.914 "name": "Nvme$subsystem", 00:23:09.914 "trtype": "$TEST_TRANSPORT", 00:23:09.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.914 "adrfam": "ipv4", 00:23:09.914 "trsvcid": "$NVMF_PORT", 00:23:09.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.914 "hdgst": ${hdgst:-false}, 00:23:09.914 "ddgst": ${ddgst:-false} 00:23:09.914 }, 00:23:09.914 "method": "bdev_nvme_attach_controller" 00:23:09.914 } 00:23:09.914 EOF 00:23:09.914 )") 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.914 { 00:23:09.914 "params": { 00:23:09.914 "name": "Nvme$subsystem", 00:23:09.914 "trtype": "$TEST_TRANSPORT", 00:23:09.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.914 "adrfam": "ipv4", 00:23:09.914 "trsvcid": "$NVMF_PORT", 00:23:09.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:09.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.914 "hdgst": ${hdgst:-false}, 00:23:09.914 "ddgst": ${ddgst:-false} 00:23:09.914 }, 00:23:09.914 "method": "bdev_nvme_attach_controller" 00:23:09.914 } 00:23:09.914 EOF 00:23:09.914 )") 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.914 { 00:23:09.914 "params": { 00:23:09.914 "name": "Nvme$subsystem", 00:23:09.914 "trtype": "$TEST_TRANSPORT", 00:23:09.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.914 "adrfam": "ipv4", 00:23:09.914 "trsvcid": "$NVMF_PORT", 00:23:09.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.914 "hdgst": ${hdgst:-false}, 00:23:09.914 "ddgst": ${ddgst:-false} 00:23:09.914 }, 00:23:09.914 "method": "bdev_nvme_attach_controller" 00:23:09.914 } 00:23:09.914 EOF 00:23:09.914 )") 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.914 { 00:23:09.914 "params": { 00:23:09.914 "name": "Nvme$subsystem", 00:23:09.914 "trtype": "$TEST_TRANSPORT", 00:23:09.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.914 "adrfam": "ipv4", 00:23:09.914 "trsvcid": "$NVMF_PORT", 00:23:09.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.914 "hdgst": ${hdgst:-false}, 00:23:09.914 "ddgst": ${ddgst:-false} 00:23:09.914 }, 00:23:09.914 "method": "bdev_nvme_attach_controller" 00:23:09.914 } 00:23:09.914 EOF 00:23:09.914 )") 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.914 { 00:23:09.914 "params": { 00:23:09.914 "name": "Nvme$subsystem", 00:23:09.914 "trtype": "$TEST_TRANSPORT", 00:23:09.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.914 "adrfam": "ipv4", 00:23:09.914 "trsvcid": "$NVMF_PORT", 00:23:09.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.914 "hdgst": ${hdgst:-false}, 00:23:09.914 "ddgst": ${ddgst:-false} 00:23:09.914 }, 00:23:09.914 "method": "bdev_nvme_attach_controller" 00:23:09.914 } 00:23:09.914 EOF 00:23:09.914 )") 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.914 { 00:23:09.914 "params": { 00:23:09.914 "name": "Nvme$subsystem", 00:23:09.914 "trtype": "$TEST_TRANSPORT", 00:23:09.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.914 "adrfam": "ipv4", 00:23:09.914 "trsvcid": "$NVMF_PORT", 00:23:09.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.914 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:09.914 "hdgst": ${hdgst:-false}, 00:23:09.914 "ddgst": ${ddgst:-false} 00:23:09.914 }, 00:23:09.914 "method": "bdev_nvme_attach_controller" 00:23:09.914 } 00:23:09.914 EOF 00:23:09.914 )") 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.914 { 00:23:09.914 "params": { 00:23:09.914 "name": "Nvme$subsystem", 00:23:09.914 "trtype": "$TEST_TRANSPORT", 00:23:09.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.914 "adrfam": "ipv4", 00:23:09.914 "trsvcid": "$NVMF_PORT", 00:23:09.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.914 "hdgst": ${hdgst:-false}, 00:23:09.914 "ddgst": ${ddgst:-false} 00:23:09.914 }, 00:23:09.914 "method": "bdev_nvme_attach_controller" 00:23:09.914 } 00:23:09.914 EOF 00:23:09.914 )") 00:23:09.914 [2024-07-15 11:33:53.288198] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:23:09.914 [2024-07-15 11:33:53.288250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670417 ] 00:23:09.914 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.915 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.915 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.915 { 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme$subsystem", 00:23:09.915 "trtype": "$TEST_TRANSPORT", 00:23:09.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "$NVMF_PORT", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.915 "hdgst": ${hdgst:-false}, 00:23:09.915 "ddgst": ${ddgst:-false} 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 } 00:23:09.915 EOF 00:23:09.915 )") 00:23:09.915 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.915 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.915 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.915 { 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme$subsystem", 00:23:09.915 "trtype": "$TEST_TRANSPORT", 00:23:09.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "$NVMF_PORT", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.915 "hdgst": ${hdgst:-false}, 00:23:09.915 "ddgst": ${ddgst:-false} 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 } 00:23:09.915 EOF 00:23:09.915 )") 00:23:09.915 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.915 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.915 11:33:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.915 { 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme$subsystem", 00:23:09.915 "trtype": "$TEST_TRANSPORT", 00:23:09.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "$NVMF_PORT", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.915 "hdgst": ${hdgst:-false}, 00:23:09.915 "ddgst": ${ddgst:-false} 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 } 00:23:09.915 EOF 00:23:09.915 )") 00:23:09.915 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.915 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.915 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:09.915 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:09.915 11:33:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme1", 00:23:09.915 "trtype": "tcp", 00:23:09.915 "traddr": "10.0.0.2", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "4420", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.915 "hdgst": false, 00:23:09.915 "ddgst": false 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 },{ 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme2", 00:23:09.915 "trtype": "tcp", 00:23:09.915 "traddr": "10.0.0.2", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "4420", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:09.915 "hdgst": false, 00:23:09.915 "ddgst": false 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 },{ 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme3", 00:23:09.915 "trtype": "tcp", 00:23:09.915 "traddr": "10.0.0.2", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "4420", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:09.915 "hdgst": false, 00:23:09.915 "ddgst": false 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 },{ 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme4", 00:23:09.915 "trtype": "tcp", 00:23:09.915 "traddr": "10.0.0.2", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "4420", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:09.915 "hdgst": false, 00:23:09.915 "ddgst": false 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 },{ 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme5", 00:23:09.915 "trtype": "tcp", 00:23:09.915 "traddr": "10.0.0.2", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "4420", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:09.915 "hdgst": false, 00:23:09.915 "ddgst": false 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 },{ 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme6", 00:23:09.915 "trtype": "tcp", 00:23:09.915 "traddr": "10.0.0.2", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "4420", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host6", 
00:23:09.915 "hdgst": false, 00:23:09.915 "ddgst": false 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 },{ 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme7", 00:23:09.915 "trtype": "tcp", 00:23:09.915 "traddr": "10.0.0.2", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "4420", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:09.915 "hdgst": false, 00:23:09.915 "ddgst": false 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 },{ 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme8", 00:23:09.915 "trtype": "tcp", 00:23:09.915 "traddr": "10.0.0.2", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "4420", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:09.915 "hdgst": false, 00:23:09.915 "ddgst": false 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 },{ 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme9", 00:23:09.915 "trtype": "tcp", 00:23:09.915 "traddr": "10.0.0.2", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "4420", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:09.915 "hdgst": false, 00:23:09.915 "ddgst": false 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 },{ 00:23:09.915 "params": { 00:23:09.915 "name": "Nvme10", 00:23:09.915 "trtype": "tcp", 00:23:09.915 "traddr": "10.0.0.2", 00:23:09.915 "adrfam": "ipv4", 00:23:09.915 "trsvcid": "4420", 00:23:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:09.915 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:09.915 "hdgst": false, 00:23:09.915 "ddgst": false 00:23:09.915 }, 00:23:09.915 "method": "bdev_nvme_attach_controller" 00:23:09.915 }' 00:23:09.915 [2024-07-15 11:33:53.354519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.915 [2024-07-15 11:33:53.427907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.819 Running I/O for 10 seconds... 
00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:11.819 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:12.079 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:12.079 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:12.079 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.079 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.079 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:12.079 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.079 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.079 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:23:12.079 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:12.079 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 670144 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 670144 ']' 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 670144 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 670144 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 670144' 00:23:12.347 killing process with pid 670144 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 670144 00:23:12.347 11:33:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 670144 00:23:12.347 [2024-07-15 11:33:55.890422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae430 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.890472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae430 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.890479] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae430 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.890487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xfae430 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.890494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae430 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.890500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae430 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.890507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae430 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.890514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae430 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891935] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.891993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the 
state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.347 [2024-07-15 11:33:55.892119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.892231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1191a00 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.893501] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae8d0 is same with the state(5) to be set 00:23:12.348 [2024-07-15 11:33:55.895066] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaed70 is same with the state(5) to be set 00:23:12.349 [2024-07-15 11:33:55.896551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaf230 is same with the state(5) to be set 00:23:12.350 [2024-07-15 11:33:55.897680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaf6f0 is same with the state(5) to be set 00:23:12.350 [2024-07-15 11:33:55.899638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0030 is same with the state(5) to be set 00:23:12.351 [2024-07-15 11:33:55.900754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb04d0 is same with the state(5) to be set 00:23:12.352 [2024-07-15 11:33:55.901115]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb04d0 is same with the state(5) to be set 00:23:12.352 [2024-07-15 11:33:55.901120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb04d0 is same with the state(5) to be set 00:23:12.352 [2024-07-15 11:33:55.901126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb04d0 is same with the state(5) to be set 00:23:12.352 [2024-07-15 11:33:55.901132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb04d0 is same with the state(5) to be set 00:23:12.352 [2024-07-15 11:33:55.901138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb04d0 is same with the state(5) to be set 00:23:12.352 [2024-07-15 11:33:55.901144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb04d0 is same with the state(5) to be set 00:23:12.352 [2024-07-15 11:33:55.901150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb04d0 is same with the state(5) to be set 00:23:12.352 [2024-07-15 11:33:55.901717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0970 is same with the state(5) to be set 00:23:12.352 [2024-07-15 11:33:55.901731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0970 is same with the state(5) to be set 00:23:12.352 [2024-07-15 11:33:55.901738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0970 is same with the state(5) to be set 00:23:12.352 [2024-07-15 11:33:55.905610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 
[2024-07-15 11:33:55.905732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 
11:33:55.905883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.352 [2024-07-15 11:33:55.905948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.352 [2024-07-15 11:33:55.905956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.905964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.905971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.905978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.905986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.905992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 
11:33:55.906037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 
11:33:55.906186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906345] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.353 [2024-07-15 11:33:55.906605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.353 [2024-07-15 11:33:55.906613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.906621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.906648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:12.354 [2024-07-15 11:33:55.906701] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1592910 was disconnected and freed. reset controller. 
00:23:12.354 [2024-07-15 11:33:55.907324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 
[2024-07-15 11:33:55.907503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 
11:33:55.907657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 
11:33:55.907807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.354 [2024-07-15 11:33:55.907930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.354 [2024-07-15 11:33:55.907938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.907946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.907952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 
11:33:55.907960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.907966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.907975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.907984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.907992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.907998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 
11:33:55.908109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908274] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.355 [2024-07-15 11:33:55.908304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.355 [2024-07-15 11:33:55.908312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.908320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.356 [2024-07-15 11:33:55.908327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.908363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:12.356 [2024-07-15 11:33:55.908415] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1579a60 was disconnected and freed. reset controller. 00:23:12.356 [2024-07-15 11:33:55.908523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.908536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.908544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.908550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.908557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.908564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.908571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.908577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.908583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450c70 is same with the state(5) to be set 00:23:12.356 [2024-07-15 11:33:55.908608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.908616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.908624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.908630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.908637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.908644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.908652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.908661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.908667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9f340 is same with the state(5) to be set 00:23:12.356 [2024-07-15 11:33:55.908690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1494b30 is same with the state(5) to be set 00:23:12.356 [2024-07-15 11:33:55.917431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 
11:33:55.917462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1497bf0 is same with the state(5) to be set 00:23:12.356 [2024-07-15 11:33:55.917517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16058b0 is same with the state(5) to be set 00:23:12.356 [2024-07-15 11:33:55.917605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161c0d0 is same with the state(5) to be set 00:23:12.356 [2024-07-15 11:33:55.917685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.356 [2024-07-15 11:33:55.917739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.356 [2024-07-15 11:33:55.917746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473190 is same with the state(5) to be set 00:23:12.357 [2024-07-15 11:33:55.917770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.357 [2024-07-15 11:33:55.917779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.917786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.357 [2024-07-15 11:33:55.917793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.917803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.357 [2024-07-15 11:33:55.917809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.917816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.357 [2024-07-15 11:33:55.917824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.917830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625050 is same with the state(5) to be set 00:23:12.357 [2024-07-15 
11:33:55.917853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.357 [2024-07-15 11:33:55.917863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.917871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.357 [2024-07-15 11:33:55.917877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.917885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.357 [2024-07-15 11:33:55.917892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.917899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.357 [2024-07-15 11:33:55.917907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.917914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161c8d0 is same with the state(5) to be set 00:23:12.357 [2024-07-15 11:33:55.917936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.357 [2024-07-15 11:33:55.917945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.917953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.357 [2024-07-15 11:33:55.917960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.917967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.357 [2024-07-15 11:33:55.917974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.917982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.357 [2024-07-15 11:33:55.917988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.917995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148d1d0 is same with the state(5) to be set 00:23:12.357 [2024-07-15 11:33:55.918028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.357 [2024-07-15 11:33:55.918478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.357 [2024-07-15 11:33:55.918487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:12.358 [2024-07-15 11:33:55.918853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.918986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.918993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.919002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 
[2024-07-15 11:33:55.919008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.919017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.919025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.919033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.919042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.919109] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1591ea0 was disconnected and freed. reset controller. 00:23:12.358 [2024-07-15 11:33:55.920271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.920297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.920309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.920317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.920326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.920333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.920342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.920349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.920358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.358 [2024-07-15 11:33:55.920365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.358 [2024-07-15 11:33:55.920373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.920988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.920995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.359 [2024-07-15 11:33:55.921003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.359 [2024-07-15 11:33:55.921009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.360 [2024-07-15 11:33:55.921303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.360 [2024-07-15 11:33:55.921384] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1593de0 was disconnected and freed. reset controller. 
00:23:12.360 [2024-07-15 11:33:55.922409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:12.360 [2024-07-15 11:33:55.922434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1494b30 (9): Bad file descriptor 00:23:12.360 [2024-07-15 11:33:55.922463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1450c70 (9): Bad file descriptor 00:23:12.360 [2024-07-15 11:33:55.922475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9f340 (9): Bad file descriptor 00:23:12.360 [2024-07-15 11:33:55.922490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1497bf0 (9): Bad file descriptor 00:23:12.360 [2024-07-15 11:33:55.922502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16058b0 (9): Bad file descriptor 00:23:12.360 [2024-07-15 11:33:55.922512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161c0d0 (9): Bad file descriptor 00:23:12.360 [2024-07-15 11:33:55.922523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1473190 (9): Bad file descriptor 00:23:12.360 [2024-07-15 11:33:55.922535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1625050 (9): Bad file descriptor 00:23:12.360 [2024-07-15 11:33:55.922549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161c8d0 (9): Bad file descriptor 00:23:12.360 [2024-07-15 11:33:55.922563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148d1d0 (9): Bad file descriptor 00:23:12.360 [2024-07-15 11:33:55.925186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:12.360 [2024-07-15 11:33:55.925214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.360 [2024-07-15 11:33:55.926119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:12.360 [2024-07-15 11:33:55.926346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.360 [2024-07-15 11:33:55.926364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1494b30 with addr=10.0.0.2, port=4420 00:23:12.360 [2024-07-15 11:33:55.926375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1494b30 is same with the state(5) to be set 00:23:12.360 [2024-07-15 11:33:55.926526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.360 [2024-07-15 11:33:55.926539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161c0d0 with addr=10.0.0.2, port=4420 00:23:12.360 [2024-07-15 11:33:55.926548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161c0d0 is same with the state(5) to be set 00:23:12.360 [2024-07-15 11:33:55.926715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.360 [2024-07-15 11:33:55.926738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1450c70 with addr=10.0.0.2, port=4420 00:23:12.360 [2024-07-15 11:33:55.926748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450c70 is same with the state(5) to be set 00:23:12.360 
[2024-07-15 11:33:55.927074] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:12.360 [2024-07-15 11:33:55.927129] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:12.360 [2024-07-15 11:33:55.927179] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:12.360 [2024-07-15 11:33:55.927234] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:12.360 [2024-07-15 11:33:55.927564] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:12.360 [2024-07-15 11:33:55.927624] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:12.360 [2024-07-15 11:33:55.927758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.360 [2024-07-15 11:33:55.927781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1473190 with addr=10.0.0.2, port=4420 00:23:12.360 [2024-07-15 11:33:55.927790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473190 is same with the state(5) to be set 00:23:12.360 [2024-07-15 11:33:55.927802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1494b30 (9): Bad file descriptor 00:23:12.360 [2024-07-15 11:33:55.927814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161c0d0 (9): Bad file descriptor 00:23:12.361 [2024-07-15 11:33:55.927825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1450c70 (9): Bad file descriptor 00:23:12.361 [2024-07-15 11:33:55.927943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1473190 (9): Bad file descriptor 00:23:12.361 [2024-07-15 11:33:55.927958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:12.361 [2024-07-15 11:33:55.927967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:12.361 [2024-07-15 11:33:55.927977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:12.361 [2024-07-15 11:33:55.927991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:12.361 [2024-07-15 11:33:55.927999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:12.361 [2024-07-15 11:33:55.928007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:12.361 [2024-07-15 11:33:55.928021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.361 [2024-07-15 11:33:55.928030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.361 [2024-07-15 11:33:55.928038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.361 [2024-07-15 11:33:55.928095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.361 [2024-07-15 11:33:55.928106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.361 [2024-07-15 11:33:55.928113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.361 [2024-07-15 11:33:55.928122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:12.361 [2024-07-15 11:33:55.928129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:12.361 [2024-07-15 11:33:55.928138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:12.361 [2024-07-15 11:33:55.928177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.628 [2024-07-15 11:33:55.932565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:12.628 [2024-07-15 11:33:55.932930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.932985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.932992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 
11:33:55.933090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933257] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.628 [2024-07-15 11:33:55.933288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.628 [2024-07-15 11:33:55.933297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.933647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.933655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518490 is same with the state(5) to be set 00:23:12.629 [2024-07-15 11:33:55.934675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934750] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.629 [2024-07-15 11:33:55.934976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.629 [2024-07-15 11:33:55.934983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.934992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.934999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:12.630 [2024-07-15 11:33:55.935401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 
11:33:55.935560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.630 [2024-07-15 11:33:55.935640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.630 [2024-07-15 11:33:55.935650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.935657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.935665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.935673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.935683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.935690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.935698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.935705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.935713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1519920 is same with the state(5) to be set 00:23:12.631 [2024-07-15 11:33:55.936764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936936] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.936985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.936992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.631 [2024-07-15 11:33:55.937384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.631 [2024-07-15 11:33:55.937391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:12.632 [2024-07-15 11:33:55.937579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 
11:33:55.937736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.937783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.937791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ab70 is same with the state(5) to be set 00:23:12.632 [2024-07-15 11:33:55.938796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.938809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.938821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.938828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.938837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.938845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.938853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.938861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.938870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.938876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.938885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.938892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.938901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.632 [2024-07-15 11:33:55.938908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.632 [2024-07-15 11:33:55.938917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.938923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.938934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.938942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.938950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.938958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.938965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.938973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.938982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.938989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.938998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.633 [2024-07-15 11:33:55.939485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.633 [2024-07-15 11:33:55.939493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.939825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.939833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144c040 is same with the state(5) to be set 00:23:12.634 [2024-07-15 11:33:55.940842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.940855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.940865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.940872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.940881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.940888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.940897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.940904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.940912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.940919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.940928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.940935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.940943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.940953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.940961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.940968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.940976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.940983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.940992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.940999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.941008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.941015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.941024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.941031] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.941040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.941047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.941056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.941063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.941073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.941081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.941091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.941098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.941106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.941114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.941122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.941129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.634 [2024-07-15 11:33:55.941137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.634 [2024-07-15 11:33:55.941144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:12.635 [2024-07-15 11:33:55.941679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.635 [2024-07-15 11:33:55.941766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.635 [2024-07-15 11:33:55.941774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.941781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.941790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.941797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.941805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.941813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.941821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.941829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 
11:33:55.941838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.941845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.941854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.941860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.941868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15952b0 is same with the state(5) to be set 00:23:12.636 [2024-07-15 11:33:55.943890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.943911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.943923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.943931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.943941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.943948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.943957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.943964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.943974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.943984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.943992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.636 [2024-07-15 11:33:55.944254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.636 [2024-07-15 11:33:55.944261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.637 [2024-07-15 11:33:55.944767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.637 [2024-07-15 11:33:55.944775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.638 [2024-07-15 11:33:55.944784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.638 [2024-07-15 11:33:55.944793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.638 [2024-07-15 11:33:55.944800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.638 [2024-07-15 11:33:55.944808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.638 [2024-07-15 11:33:55.944815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.638 [2024-07-15 11:33:55.944824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.638 [2024-07-15 11:33:55.944831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.638 [2024-07-15 11:33:55.944839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.638 [2024-07-15 11:33:55.944847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.638 [2024-07-15 11:33:55.944855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.638 [2024-07-15 11:33:55.944863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.638 [2024-07-15 11:33:55.944872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.638 [2024-07-15 11:33:55.944879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.638 [2024-07-15 11:33:55.944888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.638 [2024-07-15 11:33:55.944895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.638 [2024-07-15 11:33:55.944904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.638 [2024-07-15 11:33:55.944911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.638 [2024-07-15 11:33:55.944920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.638 [2024-07-15 11:33:55.944926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.638 [2024-07-15 11:33:55.944935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157aef0 is same with the state(5) to be set 00:23:12.638 [2024-07-15 11:33:55.946208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:12.638 [2024-07-15 11:33:55.946235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:12.638 [2024-07-15 11:33:55.946245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:12.638 [2024-07-15 11:33:55.946255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:12.638 [2024-07-15 11:33:55.946323] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.638 [2024-07-15 11:33:55.946336] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:12.638 [2024-07-15 11:33:55.946404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:12.638 task offset: 27648 on job bdev=Nvme6n1 fails
00:23:12.638
00:23:12.638 Latency(us)
00:23:12.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:12.638 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.638 Job: Nvme1n1 ended in about 0.89 seconds with error
00:23:12.638 Verification LBA range: start 0x0 length 0x400
00:23:12.638 Nvme1n1 : 0.89 215.99 13.50 72.00 0.00 219940.29 17666.23 213362.42
00:23:12.638 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.638 Job: Nvme2n1 ended in about 0.90 seconds with error
00:23:12.638 Verification LBA range: start 0x0 length 0x400
00:23:12.638 Nvme2n1 : 0.90 213.33 13.33 71.11 0.00 218746.66 15728.64 217921.45
00:23:12.638 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.638 Job: Nvme3n1 ended in about 0.90 seconds with error
00:23:12.638 Verification LBA range: start 0x0 length 0x400
00:23:12.638 Nvme3n1 : 0.90 212.85 13.30 70.95 0.00 215274.41 16184.54 213362.42
00:23:12.638 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.638 Job: Nvme4n1 ended in about 0.90 seconds with error
00:23:12.638 Verification LBA range: start 0x0 length 0x400
00:23:12.638 Nvme4n1 : 0.90 212.36 13.27 70.79 0.00 211851.58 13164.19 217009.64
00:23:12.638 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.638 Job: Nvme5n1 ended in about 0.91 seconds with error
00:23:12.638 Verification LBA range: start 0x0 length 0x400
00:23:12.638 Nvme5n1 : 0.91 211.88 13.24 70.63 0.00 208359.07 19261.89 196949.93
00:23:12.638 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.638 Job: Nvme6n1 ended in about 0.89 seconds with error
00:23:12.638 Verification LBA range: start 0x0 length 0x400
00:23:12.638 Nvme6n1 : 0.89 216.78 13.55 72.26 0.00 199366.23 13962.02 217009.64
00:23:12.638 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.638 Job: Nvme7n1 ended in about 0.89 seconds with error
00:23:12.638 Verification LBA range: start 0x0 length 0x400
00:23:12.638 Nvme7n1 : 0.89 215.65 13.48 71.88 0.00 196546.23 13791.05 217009.64
00:23:12.638 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.638 Job: Nvme8n1 ended in about 0.91 seconds with error
00:23:12.638 Verification LBA range: start 0x0 length 0x400
00:23:12.638 Nvme8n1 : 0.91 211.41 13.21 70.47 0.00 197097.74 16982.37 222480.47
00:23:12.638 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.638 Job: Nvme9n1 ended in about 0.89 seconds with error
00:23:12.638 Verification LBA range: start 0x0 length 0x400
00:23:12.638 Nvme9n1 : 0.89 216.27 13.52 72.09 0.00 188026.21 17894.18 222480.47
00:23:12.638 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.638 Job: Nvme10n1 ended in about 0.91 seconds with error
00:23:12.638 Verification LBA range: start 0x0 length 0x400
00:23:12.638 Nvme10n1 : 0.91 140.47 8.78 70.23 0.00 253282.69 18919.96 238892.97
00:23:12.638 ===================================================================================================================
00:23:12.638 Total : 2067.01 129.19 712.41 0.00 209761.07 13164.19 238892.97
00:23:12.638 [2024-07-15 11:33:55.969430] app.c:1053:spdk_app_stop: *WARNING*:
spdk_app_stop'd on non-zero 00:23:12.638 [2024-07-15 11:33:55.969467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:12.638 [2024-07-15 11:33:55.969827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.638 [2024-07-15 11:33:55.969846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161c8d0 with addr=10.0.0.2, port=4420 00:23:12.638 [2024-07-15 11:33:55.969856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161c8d0 is same with the state(5) to be set 00:23:12.638 [2024-07-15 11:33:55.969940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.638 [2024-07-15 11:33:55.969952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1625050 with addr=10.0.0.2, port=4420 00:23:12.638 [2024-07-15 11:33:55.969960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625050 is same with the state(5) to be set 00:23:12.638 [2024-07-15 11:33:55.970090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.638 [2024-07-15 11:33:55.970102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148d1d0 with addr=10.0.0.2, port=4420 00:23:12.638 [2024-07-15 11:33:55.970109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148d1d0 is same with the state(5) to be set 00:23:12.638 [2024-07-15 11:33:55.970258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.638 [2024-07-15 11:33:55.970271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9f340 with addr=10.0.0.2, port=4420 00:23:12.638 [2024-07-15 11:33:55.970279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9f340 is same with the state(5) to be set 00:23:12.638 [2024-07-15 11:33:55.971661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.638 [2024-07-15 11:33:55.971678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:12.638 [2024-07-15 11:33:55.971688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:12.638 [2024-07-15 11:33:55.971697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:12.638 [2024-07-15 11:33:55.971986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.638 [2024-07-15 11:33:55.972000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1497bf0 with addr=10.0.0.2, port=4420 00:23:12.638 [2024-07-15 11:33:55.972009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1497bf0 is same with the state(5) to be set 00:23:12.638 [2024-07-15 11:33:55.972189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.638 [2024-07-15 11:33:55.972201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16058b0 with addr=10.0.0.2, port=4420 00:23:12.638 [2024-07-15 11:33:55.972208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16058b0 is same with the state(5) to be set 00:23:12.639 [2024-07-15 11:33:55.972220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x161c8d0 (9): Bad file descriptor 00:23:12.639 [2024-07-15 11:33:55.972234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1625050 (9): Bad file descriptor 00:23:12.639 [2024-07-15 11:33:55.972244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148d1d0 (9): Bad file descriptor 00:23:12.639 [2024-07-15 11:33:55.972253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9f340 (9): Bad file descriptor 00:23:12.639 [2024-07-15 11:33:55.972284] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.639 [2024-07-15 11:33:55.972296] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.639 [2024-07-15 11:33:55.972307] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.639 [2024-07-15 11:33:55.972319] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.639 [2024-07-15 11:33:55.972482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.639 [2024-07-15 11:33:55.972495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1450c70 with addr=10.0.0.2, port=4420 00:23:12.639 [2024-07-15 11:33:55.972503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450c70 is same with the state(5) to be set 00:23:12.639 [2024-07-15 11:33:55.972662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.639 [2024-07-15 11:33:55.972674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161c0d0 with addr=10.0.0.2, port=4420 00:23:12.639 [2024-07-15 11:33:55.972682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161c0d0 is same with the state(5) to be set 00:23:12.639 [2024-07-15 11:33:55.972891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.639 [2024-07-15 11:33:55.972903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1494b30 with addr=10.0.0.2, port=4420 00:23:12.639 [2024-07-15 11:33:55.972911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1494b30 is same with the state(5) to be set 00:23:12.639 [2024-07-15 11:33:55.973063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.639 [2024-07-15 11:33:55.973075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1473190 with addr=10.0.0.2, port=4420 00:23:12.639 [2024-07-15 11:33:55.973083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473190 is same with the state(5) to be set 00:23:12.639 [2024-07-15 11:33:55.973091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1497bf0 (9): Bad file descriptor 00:23:12.639 [2024-07-15 11:33:55.973100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16058b0 (9): Bad file descriptor 00:23:12.639 [2024-07-15 11:33:55.973109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:12.639 [2024-07-15 11:33:55.973116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller 
reinitialization failed 00:23:12.639 [2024-07-15 11:33:55.973125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:12.639 [2024-07-15 11:33:55.973136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:12.639 [2024-07-15 11:33:55.973142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:12.639 [2024-07-15 11:33:55.973149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:12.639 [2024-07-15 11:33:55.973158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:12.639 [2024-07-15 11:33:55.973165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:12.639 [2024-07-15 11:33:55.973172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:12.639 [2024-07-15 11:33:55.973182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:12.639 [2024-07-15 11:33:55.973189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:12.639 [2024-07-15 11:33:55.973196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:12.639 [2024-07-15 11:33:55.973266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.639 [2024-07-15 11:33:55.973277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.639 [2024-07-15 11:33:55.973282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.639 [2024-07-15 11:33:55.973288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.639 [2024-07-15 11:33:55.973296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1450c70 (9): Bad file descriptor 00:23:12.639 [2024-07-15 11:33:55.973305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161c0d0 (9): Bad file descriptor 00:23:12.639 [2024-07-15 11:33:55.973314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1494b30 (9): Bad file descriptor 00:23:12.639 [2024-07-15 11:33:55.973326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1473190 (9): Bad file descriptor 00:23:12.639 [2024-07-15 11:33:55.973334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:12.639 [2024-07-15 11:33:55.973341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:12.639 [2024-07-15 11:33:55.973348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:23:12.639 [2024-07-15 11:33:55.973356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:12.639 [2024-07-15 11:33:55.973364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:12.639 [2024-07-15 11:33:55.973372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:12.639 [2024-07-15 11:33:55.973397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.639 [2024-07-15 11:33:55.973405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.639 [2024-07-15 11:33:55.973411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.639 [2024-07-15 11:33:55.973416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.639 [2024-07-15 11:33:55.973423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.639 [2024-07-15 11:33:55.973432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:12.639 [2024-07-15 11:33:55.973438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:12.639 [2024-07-15 11:33:55.973444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:12.639 [2024-07-15 11:33:55.973453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:12.639 [2024-07-15 11:33:55.973459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:12.639 [2024-07-15 11:33:55.973466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:12.639 [2024-07-15 11:33:55.973473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:12.639 [2024-07-15 11:33:55.973479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:12.639 [2024-07-15 11:33:55.973486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:12.639 [2024-07-15 11:33:55.973511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.639 [2024-07-15 11:33:55.973518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.639 [2024-07-15 11:33:55.973524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.639 [2024-07-15 11:33:55.973530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.898 11:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:12.898 11:33:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 670417 00:23:13.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (670417) - No such process 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:13.836 rmmod nvme_tcp 00:23:13.836 rmmod nvme_fabrics 00:23:13.836 rmmod nvme_keyring 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.836 11:33:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.371 11:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.371 00:23:16.371 real 0m7.967s 00:23:16.371 user 0m19.823s 00:23:16.371 sys 0m1.367s 00:23:16.371 
11:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:16.371 11:33:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.371 ************************************ 00:23:16.371 END TEST nvmf_shutdown_tc3 00:23:16.371 ************************************ 00:23:16.371 11:33:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:16.371 11:33:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:16.371 00:23:16.371 real 0m32.005s 00:23:16.371 user 1m20.543s 00:23:16.371 sys 0m8.614s 00:23:16.371 11:33:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:16.371 11:33:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:16.371 ************************************ 00:23:16.371 END TEST nvmf_shutdown 00:23:16.371 ************************************ 00:23:16.371 11:33:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:16.371 11:33:59 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:16.371 11:33:59 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:16.371 11:33:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.371 11:33:59 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:16.371 11:33:59 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.371 11:33:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.371 11:33:59 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:16.371 11:33:59 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:16.371 11:33:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:16.371 11:33:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:16.371 11:33:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.371 ************************************ 00:23:16.371 START TEST nvmf_multicontroller 00:23:16.371 ************************************ 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:16.371 * Looking for test storage... 
00:23:16.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.371 11:33:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:16.372 11:33:59 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.372 11:33:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.649 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.650 11:34:05 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:21.650 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:21.650 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:21.650 Found net devices under 0000:86:00.0: cvl_0_0 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:21.650 Found net devices under 0000:86:00.1: cvl_0_1 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.650 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:21.909 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:21.909 11:34:05 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:21.909 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:21.909 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:21.909 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:21.909 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:21.909 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:21.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:23:21.909 00:23:21.909 --- 10.0.0.2 ping statistics --- 00:23:21.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.909 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:21.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:21.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:23:21.910 00:23:21.910 --- 10.0.0.1 ping statistics --- 00:23:21.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.910 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=674681 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 674681 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 674681 ']' 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.910 11:34:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.169 [2024-07-15 11:34:05.534534] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:23:22.169 [2024-07-15 11:34:05.534579] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.169 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.169 [2024-07-15 11:34:05.602902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:22.169 [2024-07-15 11:34:05.681861] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.169 [2024-07-15 11:34:05.681896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.169 [2024-07-15 11:34:05.681903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.169 [2024-07-15 11:34:05.681909] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.169 [2024-07-15 11:34:05.681917] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:22.169 [2024-07-15 11:34:05.682024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.169 [2024-07-15 11:34:05.682129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.169 [2024-07-15 11:34:05.682130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.107 [2024-07-15 11:34:06.401797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.107 Malloc0 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.107 [2024-07-15 11:34:06.467316] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.107 
11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.107 [2024-07-15 11:34:06.475265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.107 Malloc1 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:23.107 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=674889 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 674889 /var/tmp/bdevperf.sock 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 674889 ']' 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.108 11:34:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.045 NVMe0n1 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.045 1 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.045 request: 00:23:24.045 { 00:23:24.045 "name": "NVMe0", 00:23:24.045 "trtype": "tcp", 00:23:24.045 "traddr": "10.0.0.2", 00:23:24.045 "adrfam": "ipv4", 00:23:24.045 "trsvcid": "4420", 00:23:24.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.045 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:24.045 "hostaddr": "10.0.0.2", 00:23:24.045 "hostsvcid": "60000", 00:23:24.045 "prchk_reftag": false, 00:23:24.045 "prchk_guard": false, 00:23:24.045 "hdgst": false, 00:23:24.045 "ddgst": false, 00:23:24.045 "method": "bdev_nvme_attach_controller", 00:23:24.045 "req_id": 1 00:23:24.045 } 00:23:24.045 Got JSON-RPC error response 00:23:24.045 response: 00:23:24.045 { 00:23:24.045 "code": -114, 00:23:24.045 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:24.045 } 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.045 request: 00:23:24.045 { 00:23:24.045 "name": "NVMe0", 00:23:24.045 "trtype": "tcp", 00:23:24.045 "traddr": "10.0.0.2", 00:23:24.045 "adrfam": "ipv4", 00:23:24.045 "trsvcid": "4420", 00:23:24.045 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:24.045 "hostaddr": "10.0.0.2", 00:23:24.045 "hostsvcid": "60000", 00:23:24.045 "prchk_reftag": false, 00:23:24.045 "prchk_guard": false, 00:23:24.045 
"hdgst": false, 00:23:24.045 "ddgst": false, 00:23:24.045 "method": "bdev_nvme_attach_controller", 00:23:24.045 "req_id": 1 00:23:24.045 } 00:23:24.045 Got JSON-RPC error response 00:23:24.045 response: 00:23:24.045 { 00:23:24.045 "code": -114, 00:23:24.045 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:24.045 } 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.045 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.045 request: 00:23:24.045 { 00:23:24.045 "name": "NVMe0", 00:23:24.045 "trtype": "tcp", 00:23:24.045 "traddr": "10.0.0.2", 00:23:24.045 "adrfam": "ipv4", 00:23:24.045 "trsvcid": "4420", 00:23:24.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.045 "hostaddr": "10.0.0.2", 00:23:24.045 "hostsvcid": "60000", 00:23:24.045 "prchk_reftag": false, 00:23:24.045 "prchk_guard": false, 00:23:24.045 "hdgst": false, 00:23:24.045 "ddgst": false, 00:23:24.045 "multipath": "disable", 00:23:24.045 "method": "bdev_nvme_attach_controller", 00:23:24.045 "req_id": 1 00:23:24.045 } 00:23:24.045 Got JSON-RPC error response 00:23:24.045 response: 00:23:24.045 { 00:23:24.046 "code": -114, 00:23:24.046 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:24.046 } 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.046 11:34:07 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.046 request: 00:23:24.046 { 00:23:24.046 "name": "NVMe0", 00:23:24.046 "trtype": "tcp", 00:23:24.046 "traddr": "10.0.0.2", 00:23:24.046 "adrfam": "ipv4", 00:23:24.046 "trsvcid": "4420", 00:23:24.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.046 "hostaddr": "10.0.0.2", 00:23:24.046 "hostsvcid": "60000", 00:23:24.046 "prchk_reftag": false, 00:23:24.046 "prchk_guard": false, 00:23:24.046 "hdgst": false, 00:23:24.046 "ddgst": false, 00:23:24.046 "multipath": "failover", 00:23:24.046 "method": "bdev_nvme_attach_controller", 00:23:24.046 "req_id": 1 00:23:24.046 } 00:23:24.046 Got JSON-RPC error response 00:23:24.046 response: 00:23:24.046 { 00:23:24.046 "code": -114, 00:23:24.046 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:24.046 } 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.046 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.305 00:23:24.305 11:34:07 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.305 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.305 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.305 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.305 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.305 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:24.305 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.305 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.305 00:23:24.305 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.565 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.565 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:24.565 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.565 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.565 11:34:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.565 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:24.565 11:34:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:25.501 0 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 674889 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 674889 ']' 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 674889 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 674889 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:25.501 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 674889' 00:23:25.501 killing process with pid 674889 00:23:25.501 11:34:09 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 674889 00:23:25.502 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 674889 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:25.761 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:25.761 [2024-07-15 11:34:06.582116] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:23:25.761 [2024-07-15 11:34:06.582172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674889 ] 00:23:25.761 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.761 [2024-07-15 11:34:06.637267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.761 [2024-07-15 11:34:06.717698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.761 [2024-07-15 11:34:07.894082] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 2b4f3ce2-0008-44d2-8b6e-00ab3d2a5780 already exists 00:23:25.761 [2024-07-15 11:34:07.894112] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:2b4f3ce2-0008-44d2-8b6e-00ab3d2a5780 alias for bdev NVMe1n1 00:23:25.761 [2024-07-15 11:34:07.894120] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:25.761 Running I/O for 1 seconds... 
00:23:25.761 00:23:25.761 Latency(us) 00:23:25.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.761 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:25.761 NVMe0n1 : 1.00 24781.91 96.80 0.00 0.00 5158.55 1517.30 9004.08 00:23:25.761 =================================================================================================================== 00:23:25.761 Total : 24781.91 96.80 0.00 0.00 5158.55 1517.30 9004.08 00:23:25.761 Received shutdown signal, test time was about 1.000000 seconds 00:23:25.761 00:23:25.761 Latency(us) 00:23:25.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.761 =================================================================================================================== 00:23:25.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:25.761 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:25.761 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:25.761 rmmod nvme_tcp 00:23:25.761 rmmod nvme_fabrics 00:23:25.761 rmmod nvme_keyring 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 674681 ']' 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 674681 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 674681 ']' 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 674681 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 674681 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 674681' 00:23:26.021 killing process with pid 674681 00:23:26.021 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 674681 00:23:26.021 11:34:09 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 674681 00:23:26.280 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:26.280 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:26.280 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:26.280 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:26.280 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:26.280 11:34:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.280 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.280 11:34:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.195 11:34:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:28.195 00:23:28.195 real 0m12.074s 00:23:28.195 user 0m16.404s 00:23:28.195 sys 0m5.143s 00:23:28.195 11:34:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:28.195 11:34:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.195 ************************************ 00:23:28.195 END TEST nvmf_multicontroller 00:23:28.195 ************************************ 00:23:28.195 11:34:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:28.195 11:34:11 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:28.195 11:34:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:28.195 11:34:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:28.195 11:34:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.195 ************************************ 00:23:28.195 START TEST nvmf_aer 00:23:28.195 ************************************ 00:23:28.195 11:34:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:28.501 * Looking for test storage... 
00:23:28.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.501 11:34:11 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:28.502 11:34:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.781 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.781 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:33.781 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:33.781 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:33.781 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:33.782 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:23:33.782 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:33.782 Found net devices under 0000:86:00.0: cvl_0_0 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:33.782 Found net devices under 0000:86:00.1: cvl_0_1 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.782 
11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.782 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:34.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:23:34.041 00:23:34.041 --- 10.0.0.2 ping statistics --- 00:23:34.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.041 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:23:34.041 00:23:34.041 --- 10.0.0.1 ping statistics --- 00:23:34.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.041 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=678709 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 678709 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 678709 ']' 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.041 11:34:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.300 [2024-07-15 11:34:17.661594] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:23:34.300 [2024-07-15 11:34:17.661636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.300 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.300 [2024-07-15 11:34:17.733341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:34.300 [2024-07-15 11:34:17.813655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.300 [2024-07-15 11:34:17.813690] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:34.300 [2024-07-15 11:34:17.813698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.300 [2024-07-15 11:34:17.813704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.300 [2024-07-15 11:34:17.813709] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.300 [2024-07-15 11:34:17.813764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.300 [2024-07-15 11:34:17.813868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.300 [2024-07-15 11:34:17.813977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.300 [2024-07-15 11:34:17.813978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.237 [2024-07-15 11:34:18.522223] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.237 Malloc0 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:35.237 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.238 [2024-07-15 11:34:18.574117] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.238 [ 00:23:35.238 { 00:23:35.238 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:35.238 "subtype": "Discovery", 00:23:35.238 "listen_addresses": [], 00:23:35.238 "allow_any_host": true, 00:23:35.238 "hosts": [] 00:23:35.238 }, 00:23:35.238 { 00:23:35.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.238 "subtype": "NVMe", 00:23:35.238 "listen_addresses": [ 00:23:35.238 { 00:23:35.238 "trtype": "TCP", 00:23:35.238 "adrfam": "IPv4", 00:23:35.238 "traddr": "10.0.0.2", 00:23:35.238 "trsvcid": "4420" 00:23:35.238 } 00:23:35.238 ], 00:23:35.238 "allow_any_host": true, 00:23:35.238 "hosts": [], 00:23:35.238 "serial_number": "SPDK00000000000001", 00:23:35.238 "model_number": "SPDK bdev Controller", 00:23:35.238 "max_namespaces": 2, 00:23:35.238 "min_cntlid": 1, 00:23:35.238 "max_cntlid": 65519, 00:23:35.238 "namespaces": [ 00:23:35.238 { 00:23:35.238 "nsid": 1, 00:23:35.238 "bdev_name": "Malloc0", 00:23:35.238 "name": "Malloc0", 00:23:35.238 "nguid": "1F40636D81FE41949E1547D48EB171D9", 00:23:35.238 "uuid": "1f40636d-81fe-4194-9e15-47d48eb171d9" 00:23:35.238 } 00:23:35.238 ] 00:23:35.238 } 00:23:35.238 ] 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=678957 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:35.238 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.238 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.497 Malloc1 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.497 Asynchronous Event Request test 00:23:35.497 Attaching to 10.0.0.2 00:23:35.497 Attached to 10.0.0.2 00:23:35.497 Registering asynchronous event callbacks... 00:23:35.497 Starting namespace attribute notice tests for all controllers... 00:23:35.497 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:35.497 aer_cb - Changed Namespace 00:23:35.497 Cleaning up... 00:23:35.497 [ 00:23:35.497 { 00:23:35.497 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:35.497 "subtype": "Discovery", 00:23:35.497 "listen_addresses": [], 00:23:35.497 "allow_any_host": true, 00:23:35.497 "hosts": [] 00:23:35.497 }, 00:23:35.497 { 00:23:35.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.497 "subtype": "NVMe", 00:23:35.497 "listen_addresses": [ 00:23:35.497 { 00:23:35.497 "trtype": "TCP", 00:23:35.497 "adrfam": "IPv4", 00:23:35.497 "traddr": "10.0.0.2", 00:23:35.497 "trsvcid": "4420" 00:23:35.497 } 00:23:35.497 ], 00:23:35.497 "allow_any_host": true, 00:23:35.497 "hosts": [], 00:23:35.497 "serial_number": "SPDK00000000000001", 00:23:35.497 "model_number": "SPDK bdev Controller", 00:23:35.497 "max_namespaces": 2, 00:23:35.497 "min_cntlid": 1, 00:23:35.497 "max_cntlid": 65519, 00:23:35.497 "namespaces": [ 00:23:35.497 { 00:23:35.497 "nsid": 1, 00:23:35.497 "bdev_name": "Malloc0", 00:23:35.497 "name": "Malloc0", 00:23:35.497 "nguid": "1F40636D81FE41949E1547D48EB171D9", 00:23:35.497 "uuid": "1f40636d-81fe-4194-9e15-47d48eb171d9" 00:23:35.497 }, 00:23:35.497 { 00:23:35.497 "nsid": 2, 00:23:35.497 "bdev_name": "Malloc1", 00:23:35.497 "name": "Malloc1", 00:23:35.497 "nguid": "10CA7A4B272F410688FED9CDE158A747", 00:23:35.497 "uuid": "10ca7a4b-272f-4106-88fe-d9cde158a747" 00:23:35.497 } 00:23:35.497 ] 00:23:35.497 } 00:23:35.497 ] 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 678957 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.497 11:34:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.497 rmmod nvme_tcp 00:23:35.497 rmmod nvme_fabrics 00:23:35.497 rmmod nvme_keyring 00:23:35.497 11:34:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.497 11:34:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 678709 ']' 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 678709 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 678709 ']' 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 678709 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 678709 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 678709' 00:23:35.498 killing process with pid 678709 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 678709 00:23:35.498 11:34:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 678709 00:23:35.757 11:34:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:35.757 11:34:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:35.757 11:34:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:35.757 11:34:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.757 11:34:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.757 11:34:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.757 11:34:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:23:35.757 11:34:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.290 11:34:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.290 00:23:38.290 real 0m9.541s 00:23:38.290 user 0m7.372s 00:23:38.290 sys 0m4.726s 00:23:38.290 11:34:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.290 11:34:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.290 ************************************ 00:23:38.290 END TEST nvmf_aer 00:23:38.290 ************************************ 00:23:38.290 11:34:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:38.290 11:34:21 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:38.290 11:34:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:38.290 11:34:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:38.290 11:34:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.291 ************************************ 00:23:38.291 START TEST nvmf_async_init 00:23:38.291 ************************************ 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:38.291 * Looking for test storage... 00:23:38.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=50f6413d9ddd47faa57db4df8a0a952d 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.291 11:34:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:43.564 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:43.565 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:43.565 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:43.565 Found net devices under 0000:86:00.0: cvl_0_0 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:43.565 Found net devices under 0000:86:00.1: cvl_0_1 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:43.565 
11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.565 11:34:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.565 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.565 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.565 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:43.565 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:43.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:23:43.825 00:23:43.825 --- 10.0.0.2 ping statistics --- 00:23:43.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.825 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:23:43.825 00:23:43.825 --- 10.0.0.1 ping statistics --- 00:23:43.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.825 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=682474 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 682474 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 682474 ']' 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.825 11:34:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.825 [2024-07-15 11:34:27.317506] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:23:43.825 [2024-07-15 11:34:27.317548] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.825 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.825 [2024-07-15 11:34:27.389124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.082 [2024-07-15 11:34:27.467162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.082 [2024-07-15 11:34:27.467202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.082 [2024-07-15 11:34:27.467209] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.082 [2024-07-15 11:34:27.467215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.082 [2024-07-15 11:34:27.467219] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
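Editor's note: nvmf_tcp_init and nvmfappstart, traced above, split the two e810 ports between host and target sides: one port is moved into a private network namespace, both ends get 10.0.0.x addresses, the firewall is opened for the NVMe/TCP port, and nvmf_tgt is launched inside that namespace. A condensed sketch of the same sequence, with interface names and app arguments taken from this run (the nvmf_tgt path is relative to the spdk checkout, and the backgrounding/wait handling is simplified):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # target reachable from host
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # host reachable from target
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

waitforlisten then blocks until the app is up and listening on the /var/tmp/spdk.sock RPC socket, which is why the "Waiting for process to start up..." line appears before any rpc_cmd call.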
00:23:44.082 [2024-07-15 11:34:27.467241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.650 [2024-07-15 11:34:28.173366] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.650 null0 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 50f6413d9ddd47faa57db4df8a0a952d 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.650 [2024-07-15 11:34:28.221604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.650 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.909 nvme0n1 00:23:44.909 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.909 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:44.909 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.909 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.909 [ 00:23:44.909 { 00:23:44.909 "name": "nvme0n1", 00:23:44.909 "aliases": [ 00:23:44.909 "50f6413d-9ddd-47fa-a57d-b4df8a0a952d" 00:23:44.909 ], 00:23:44.909 "product_name": "NVMe disk", 00:23:44.909 "block_size": 512, 00:23:44.909 "num_blocks": 2097152, 00:23:44.909 "uuid": "50f6413d-9ddd-47fa-a57d-b4df8a0a952d", 00:23:44.909 "assigned_rate_limits": { 00:23:44.909 "rw_ios_per_sec": 0, 00:23:44.909 "rw_mbytes_per_sec": 0, 00:23:44.909 "r_mbytes_per_sec": 0, 00:23:44.909 "w_mbytes_per_sec": 0 00:23:44.909 }, 00:23:44.909 "claimed": false, 00:23:44.909 "zoned": false, 00:23:44.909 "supported_io_types": { 00:23:44.909 "read": true, 00:23:44.909 "write": true, 00:23:44.909 "unmap": false, 00:23:44.909 "flush": true, 00:23:44.909 "reset": true, 00:23:44.909 "nvme_admin": true, 00:23:44.909 "nvme_io": true, 00:23:44.909 "nvme_io_md": false, 00:23:44.909 "write_zeroes": true, 00:23:44.909 "zcopy": false, 00:23:44.909 "get_zone_info": false, 00:23:44.909 "zone_management": false, 00:23:44.909 "zone_append": false, 00:23:44.909 "compare": true, 00:23:44.909 "compare_and_write": true, 00:23:44.909 "abort": true, 00:23:44.909 "seek_hole": false, 00:23:44.909 "seek_data": false, 00:23:44.909 "copy": true, 00:23:44.909 "nvme_iov_md": false 00:23:44.909 }, 00:23:44.909 "memory_domains": [ 00:23:44.909 { 00:23:44.909 "dma_device_id": "system", 00:23:44.909 "dma_device_type": 1 00:23:44.909 } 00:23:44.909 ], 00:23:44.909 "driver_specific": { 00:23:44.909 "nvme": [ 00:23:44.909 { 00:23:44.909 "trid": { 00:23:44.909 "trtype": "TCP", 00:23:44.909 "adrfam": "IPv4", 00:23:44.909 "traddr": "10.0.0.2", 00:23:44.909 "trsvcid": "4420", 00:23:44.910 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:44.910 }, 00:23:44.910 "ctrlr_data": { 00:23:44.910 "cntlid": 1, 00:23:44.910 "vendor_id": "0x8086", 00:23:44.910 "model_number": "SPDK bdev Controller", 00:23:44.910 "serial_number": "00000000000000000000", 00:23:44.910 "firmware_revision": "24.09", 00:23:44.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:44.910 "oacs": { 00:23:44.910 "security": 0, 00:23:44.910 "format": 0, 00:23:44.910 "firmware": 0, 00:23:44.910 "ns_manage": 0 00:23:44.910 }, 00:23:44.910 "multi_ctrlr": true, 00:23:44.910 "ana_reporting": false 00:23:44.910 }, 00:23:44.910 "vs": { 00:23:44.910 "nvme_version": "1.3" 00:23:44.910 }, 00:23:44.910 "ns_data": { 00:23:44.910 "id": 1, 00:23:44.910 "can_share": true 00:23:44.910 } 00:23:44.910 } 00:23:44.910 ], 00:23:44.910 "mp_policy": "active_passive" 00:23:44.910 } 00:23:44.910 } 00:23:44.910 ] 00:23:44.910 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.910 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
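Editor's note: the rpc_cmd calls traced above are the async_init setup path — create the TCP transport, back a namespace with a 1024-block null bdev, expose it through nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and attach to it from the host side as bdev nvme0. Assuming rpc_cmd is a thin wrapper around scripts/rpc.py talking to the app's /var/tmp/spdk.sock, the same sequence issued by hand is:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc bdev_null_create null0 1024 512
  $rpc bdev_wait_for_examine
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 50f6413d9ddd47faa57db4df8a0a952d
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
  $rpc bdev_get_bdevs -b nvme0n1      # produces the JSON dump shown above (cntlid 1, trsvcid 4420)

bdev_nvme_reset_controller nvme0 then forces the reconnect whose "resetting controller" / "Resetting controller successful" notices follow, after which the same bdev_get_bdevs output reappears with cntlid 2.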
00:23:44.910 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.910 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.910 [2024-07-15 11:34:28.482238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:44.910 [2024-07-15 11:34:28.482293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2251250 (9): Bad file descriptor 00:23:45.170 [2024-07-15 11:34:28.614301] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.170 [ 00:23:45.170 { 00:23:45.170 "name": "nvme0n1", 00:23:45.170 "aliases": [ 00:23:45.170 "50f6413d-9ddd-47fa-a57d-b4df8a0a952d" 00:23:45.170 ], 00:23:45.170 "product_name": "NVMe disk", 00:23:45.170 "block_size": 512, 00:23:45.170 "num_blocks": 2097152, 00:23:45.170 "uuid": "50f6413d-9ddd-47fa-a57d-b4df8a0a952d", 00:23:45.170 "assigned_rate_limits": { 00:23:45.170 "rw_ios_per_sec": 0, 00:23:45.170 "rw_mbytes_per_sec": 0, 00:23:45.170 "r_mbytes_per_sec": 0, 00:23:45.170 "w_mbytes_per_sec": 0 00:23:45.170 }, 00:23:45.170 "claimed": false, 00:23:45.170 "zoned": false, 00:23:45.170 "supported_io_types": { 00:23:45.170 "read": true, 00:23:45.170 "write": true, 00:23:45.170 "unmap": false, 00:23:45.170 "flush": true, 00:23:45.170 "reset": true, 00:23:45.170 "nvme_admin": true, 00:23:45.170 "nvme_io": true, 00:23:45.170 "nvme_io_md": false, 00:23:45.170 "write_zeroes": true, 00:23:45.170 "zcopy": false, 00:23:45.170 "get_zone_info": false, 00:23:45.170 "zone_management": false, 00:23:45.170 "zone_append": false, 00:23:45.170 "compare": true, 00:23:45.170 "compare_and_write": true, 00:23:45.170 "abort": true, 00:23:45.170 "seek_hole": false, 00:23:45.170 "seek_data": false, 00:23:45.170 "copy": true, 00:23:45.170 "nvme_iov_md": false 00:23:45.170 }, 00:23:45.170 "memory_domains": [ 00:23:45.170 { 00:23:45.170 "dma_device_id": "system", 00:23:45.170 "dma_device_type": 1 00:23:45.170 } 00:23:45.170 ], 00:23:45.170 "driver_specific": { 00:23:45.170 "nvme": [ 00:23:45.170 { 00:23:45.170 "trid": { 00:23:45.170 "trtype": "TCP", 00:23:45.170 "adrfam": "IPv4", 00:23:45.170 "traddr": "10.0.0.2", 00:23:45.170 "trsvcid": "4420", 00:23:45.170 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:45.170 }, 00:23:45.170 "ctrlr_data": { 00:23:45.170 "cntlid": 2, 00:23:45.170 "vendor_id": "0x8086", 00:23:45.170 "model_number": "SPDK bdev Controller", 00:23:45.170 "serial_number": "00000000000000000000", 00:23:45.170 "firmware_revision": "24.09", 00:23:45.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.170 "oacs": { 00:23:45.170 "security": 0, 00:23:45.170 "format": 0, 00:23:45.170 "firmware": 0, 00:23:45.170 "ns_manage": 0 00:23:45.170 }, 00:23:45.170 "multi_ctrlr": true, 00:23:45.170 "ana_reporting": false 00:23:45.170 }, 00:23:45.170 "vs": { 00:23:45.170 "nvme_version": "1.3" 00:23:45.170 }, 00:23:45.170 "ns_data": { 00:23:45.170 "id": 1, 00:23:45.170 "can_share": true 00:23:45.170 } 00:23:45.170 } 00:23:45.170 ], 00:23:45.170 "mp_policy": "active_passive" 00:23:45.170 } 00:23:45.170 } 
00:23:45.170 ] 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KKMhuMKo2e 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KKMhuMKo2e 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.170 [2024-07-15 11:34:28.674837] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:45.170 [2024-07-15 11:34:28.674942] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KKMhuMKo2e 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.170 [2024-07-15 11:34:28.682855] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KKMhuMKo2e 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.170 [2024-07-15 11:34:28.690886] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.170 [2024-07-15 11:34:28.690920] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
00:23:45.170 nvme0n1 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.170 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.431 [ 00:23:45.431 { 00:23:45.431 "name": "nvme0n1", 00:23:45.431 "aliases": [ 00:23:45.431 "50f6413d-9ddd-47fa-a57d-b4df8a0a952d" 00:23:45.431 ], 00:23:45.431 "product_name": "NVMe disk", 00:23:45.431 "block_size": 512, 00:23:45.431 "num_blocks": 2097152, 00:23:45.431 "uuid": "50f6413d-9ddd-47fa-a57d-b4df8a0a952d", 00:23:45.431 "assigned_rate_limits": { 00:23:45.431 "rw_ios_per_sec": 0, 00:23:45.431 "rw_mbytes_per_sec": 0, 00:23:45.431 "r_mbytes_per_sec": 0, 00:23:45.431 "w_mbytes_per_sec": 0 00:23:45.431 }, 00:23:45.431 "claimed": false, 00:23:45.431 "zoned": false, 00:23:45.431 "supported_io_types": { 00:23:45.431 "read": true, 00:23:45.431 "write": true, 00:23:45.431 "unmap": false, 00:23:45.431 "flush": true, 00:23:45.431 "reset": true, 00:23:45.431 "nvme_admin": true, 00:23:45.431 "nvme_io": true, 00:23:45.431 "nvme_io_md": false, 00:23:45.431 "write_zeroes": true, 00:23:45.431 "zcopy": false, 00:23:45.431 "get_zone_info": false, 00:23:45.431 "zone_management": false, 00:23:45.431 "zone_append": false, 00:23:45.431 "compare": true, 00:23:45.431 "compare_and_write": true, 00:23:45.431 "abort": true, 00:23:45.431 "seek_hole": false, 00:23:45.431 "seek_data": false, 00:23:45.431 "copy": true, 00:23:45.431 "nvme_iov_md": false 00:23:45.431 }, 00:23:45.431 "memory_domains": [ 00:23:45.431 { 00:23:45.431 "dma_device_id": "system", 00:23:45.431 "dma_device_type": 1 00:23:45.431 } 00:23:45.431 ], 00:23:45.431 "driver_specific": { 00:23:45.431 "nvme": [ 00:23:45.431 { 00:23:45.431 "trid": { 00:23:45.431 "trtype": "TCP", 00:23:45.431 "adrfam": "IPv4", 00:23:45.431 "traddr": "10.0.0.2", 00:23:45.431 "trsvcid": "4421", 00:23:45.431 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:45.431 }, 00:23:45.431 "ctrlr_data": { 00:23:45.431 "cntlid": 3, 00:23:45.431 "vendor_id": "0x8086", 00:23:45.431 "model_number": "SPDK bdev Controller", 00:23:45.431 "serial_number": "00000000000000000000", 00:23:45.431 "firmware_revision": "24.09", 00:23:45.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.431 "oacs": { 00:23:45.431 "security": 0, 00:23:45.431 "format": 0, 00:23:45.431 "firmware": 0, 00:23:45.431 "ns_manage": 0 00:23:45.431 }, 00:23:45.431 "multi_ctrlr": true, 00:23:45.431 "ana_reporting": false 00:23:45.431 }, 00:23:45.431 "vs": { 00:23:45.431 "nvme_version": "1.3" 00:23:45.431 }, 00:23:45.431 "ns_data": { 00:23:45.431 "id": 1, 00:23:45.431 "can_share": true 00:23:45.431 } 00:23:45.431 } 00:23:45.431 ], 00:23:45.431 "mp_policy": "active_passive" 00:23:45.431 } 00:23:45.431 } 00:23:45.431 ] 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.KKMhuMKo2e 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.431 rmmod nvme_tcp 00:23:45.431 rmmod nvme_fabrics 00:23:45.431 rmmod nvme_keyring 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 682474 ']' 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 682474 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 682474 ']' 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 682474 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 682474 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 682474' 00:23:45.431 killing process with pid 682474 00:23:45.431 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 682474 00:23:45.431 [2024-07-15 11:34:28.903836] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:45.431 [2024-07-15 11:34:28.903861] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:45.432 11:34:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 682474 00:23:45.691 11:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.691 11:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.691 11:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.691 11:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.691 11:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.692 11:34:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.692 11:34:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.692 11:34:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:23:47.600 11:34:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:47.600 00:23:47.600 real 0m9.746s 00:23:47.600 user 0m3.602s 00:23:47.600 sys 0m4.699s 00:23:47.600 11:34:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:47.600 11:34:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.600 ************************************ 00:23:47.600 END TEST nvmf_async_init 00:23:47.600 ************************************ 00:23:47.600 11:34:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:47.600 11:34:31 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:47.600 11:34:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:47.600 11:34:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.600 11:34:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:47.859 ************************************ 00:23:47.859 START TEST dma 00:23:47.859 ************************************ 00:23:47.859 11:34:31 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:47.859 * Looking for test storage... 00:23:47.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.859 11:34:31 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.859 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.859 11:34:31 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.859 11:34:31 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.859 11:34:31 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.860 11:34:31 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.860 11:34:31 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.860 11:34:31 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.860 11:34:31 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:47.860 11:34:31 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.860 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:47.860 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.860 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.860 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.860 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.860 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.860 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.860 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.860 11:34:31 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.860 11:34:31 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:47.860 11:34:31 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:47.860 00:23:47.860 real 0m0.115s 00:23:47.860 user 0m0.059s 00:23:47.860 sys 0m0.064s 00:23:47.860 11:34:31 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:47.860 11:34:31 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:23:47.860 ************************************ 00:23:47.860 END TEST dma 00:23:47.860 ************************************ 00:23:47.860 11:34:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:47.860 11:34:31 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:47.860 11:34:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:47.860 11:34:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.860 11:34:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:47.860 ************************************ 00:23:47.860 START TEST nvmf_identify 00:23:47.860 ************************************ 00:23:47.860 11:34:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:48.120 * Looking for test storage... 00:23:48.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.120 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.121 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.121 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.121 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.121 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.121 11:34:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.121 11:34:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.121 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.121 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.121 11:34:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:48.121 11:34:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:53.398 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:53.399 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:53.399 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:53.399 Found net devices under 0000:86:00.0: cvl_0_0 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:53.399 Found net devices under 0000:86:00.1: cvl_0_1 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:53.399 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.658 11:34:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.658 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.658 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.658 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:53.658 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:53.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:53.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:23:53.659 00:23:53.659 --- 10.0.0.2 ping statistics --- 00:23:53.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.659 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:23:53.659 00:23:53.659 --- 10.0.0.1 ping statistics --- 00:23:53.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.659 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:53.659 11:34:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=686285 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 686285 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 686285 ']' 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.918 11:34:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.918 [2024-07-15 11:34:37.309632] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:23:53.918 [2024-07-15 11:34:37.309674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.918 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.918 [2024-07-15 11:34:37.380667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.918 [2024-07-15 11:34:37.461301] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.918 [2024-07-15 11:34:37.461336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.918 [2024-07-15 11:34:37.461343] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.918 [2024-07-15 11:34:37.461348] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.918 [2024-07-15 11:34:37.461353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.918 [2024-07-15 11:34:37.461464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.918 [2024-07-15 11:34:37.461571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.918 [2024-07-15 11:34:37.461680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.918 [2024-07-15 11:34:37.461681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.858 [2024-07-15 11:34:38.114067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.858 Malloc0 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.858 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.859 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.859 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.859 [2024-07-15 11:34:38.202187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.859 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.859 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:54.859 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.859 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.859 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.859 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:54.859 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.859 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.859 [ 00:23:54.859 { 00:23:54.859 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:54.859 "subtype": "Discovery", 00:23:54.859 "listen_addresses": [ 00:23:54.859 { 00:23:54.859 "trtype": "TCP", 00:23:54.859 "adrfam": "IPv4", 00:23:54.859 "traddr": "10.0.0.2", 00:23:54.859 "trsvcid": "4420" 00:23:54.859 } 00:23:54.859 ], 00:23:54.859 "allow_any_host": true, 00:23:54.859 "hosts": [] 00:23:54.859 }, 00:23:54.859 { 00:23:54.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.859 "subtype": "NVMe", 00:23:54.859 "listen_addresses": [ 00:23:54.859 { 00:23:54.859 "trtype": "TCP", 00:23:54.859 "adrfam": "IPv4", 00:23:54.859 "traddr": "10.0.0.2", 00:23:54.859 "trsvcid": "4420" 00:23:54.859 } 00:23:54.859 ], 00:23:54.859 "allow_any_host": true, 00:23:54.859 "hosts": [], 00:23:54.859 "serial_number": "SPDK00000000000001", 00:23:54.859 "model_number": "SPDK bdev Controller", 00:23:54.859 "max_namespaces": 32, 00:23:54.859 "min_cntlid": 1, 00:23:54.859 "max_cntlid": 65519, 00:23:54.859 "namespaces": [ 00:23:54.859 { 00:23:54.859 "nsid": 1, 00:23:54.859 "bdev_name": "Malloc0", 00:23:54.859 "name": "Malloc0", 00:23:54.859 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:54.859 "eui64": "ABCDEF0123456789", 00:23:54.859 "uuid": "351f6854-475f-490b-a517-31be250b1fca" 00:23:54.859 } 00:23:54.859 ] 00:23:54.859 } 00:23:54.859 ] 00:23:54.859 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.859 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:54.859 [2024-07-15 11:34:38.254263] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:23:54.859 [2024-07-15 11:34:38.254297] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686504 ] 00:23:54.859 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.859 [2024-07-15 11:34:38.282779] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:54.859 [2024-07-15 11:34:38.282828] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:54.859 [2024-07-15 11:34:38.282833] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:54.859 [2024-07-15 11:34:38.282844] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:54.859 [2024-07-15 11:34:38.282850] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:54.859 [2024-07-15 11:34:38.283217] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:54.859 [2024-07-15 11:34:38.283250] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x139bec0 0 00:23:54.859 [2024-07-15 11:34:38.294234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:54.859 [2024-07-15 11:34:38.294244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:54.859 [2024-07-15 11:34:38.294248] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:54.859 [2024-07-15 11:34:38.294252] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:54.859 [2024-07-15 11:34:38.294286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.294291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.294295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139bec0) 00:23:54.859 [2024-07-15 11:34:38.294307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:54.859 [2024-07-15 11:34:38.294322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141ee40, cid 0, qid 0 00:23:54.859 [2024-07-15 11:34:38.304234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.859 [2024-07-15 11:34:38.304244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.859 [2024-07-15 11:34:38.304247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141ee40) on tqpair=0x139bec0 00:23:54.859 [2024-07-15 11:34:38.304265] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:54.859 [2024-07-15 11:34:38.304272] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:54.859 [2024-07-15 11:34:38.304276] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:54.859 [2024-07-15 11:34:38.304290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304294] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139bec0) 00:23:54.859 [2024-07-15 11:34:38.304305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.859 [2024-07-15 11:34:38.304318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141ee40, cid 0, qid 0 00:23:54.859 [2024-07-15 11:34:38.304446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.859 [2024-07-15 11:34:38.304452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.859 [2024-07-15 11:34:38.304455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141ee40) on tqpair=0x139bec0 00:23:54.859 [2024-07-15 11:34:38.304463] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:54.859 [2024-07-15 11:34:38.304470] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:54.859 [2024-07-15 11:34:38.304480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139bec0) 00:23:54.859 [2024-07-15 11:34:38.304493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.859 [2024-07-15 11:34:38.304503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141ee40, cid 0, qid 0 00:23:54.859 [2024-07-15 11:34:38.304571] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.859 [2024-07-15 11:34:38.304577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.859 [2024-07-15 11:34:38.304580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141ee40) on tqpair=0x139bec0 00:23:54.859 [2024-07-15 11:34:38.304588] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:54.859 [2024-07-15 11:34:38.304595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:54.859 [2024-07-15 11:34:38.304601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304604] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139bec0) 00:23:54.859 [2024-07-15 11:34:38.304613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.859 [2024-07-15 11:34:38.304622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141ee40, cid 0, qid 0 00:23:54.859 [2024-07-15 11:34:38.304730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.859 
[2024-07-15 11:34:38.304736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.859 [2024-07-15 11:34:38.304739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141ee40) on tqpair=0x139bec0 00:23:54.859 [2024-07-15 11:34:38.304746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:54.859 [2024-07-15 11:34:38.304754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139bec0) 00:23:54.859 [2024-07-15 11:34:38.304767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.859 [2024-07-15 11:34:38.304776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141ee40, cid 0, qid 0 00:23:54.859 [2024-07-15 11:34:38.304881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.859 [2024-07-15 11:34:38.304887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.859 [2024-07-15 11:34:38.304890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.859 [2024-07-15 11:34:38.304894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141ee40) on tqpair=0x139bec0 00:23:54.860 [2024-07-15 11:34:38.304898] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:54.860 [2024-07-15 11:34:38.304902] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:54.860 [2024-07-15 11:34:38.304909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:54.860 [2024-07-15 11:34:38.305016] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:54.860 [2024-07-15 11:34:38.305020] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:54.860 [2024-07-15 11:34:38.305027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305033] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139bec0) 00:23:54.860 [2024-07-15 11:34:38.305039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.860 [2024-07-15 11:34:38.305048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141ee40, cid 0, qid 0 00:23:54.860 [2024-07-15 11:34:38.305168] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.860 [2024-07-15 11:34:38.305174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.860 [2024-07-15 11:34:38.305177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141ee40) on tqpair=0x139bec0 00:23:54.860 [2024-07-15 11:34:38.305184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:54.860 [2024-07-15 11:34:38.305192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139bec0) 00:23:54.860 [2024-07-15 11:34:38.305204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.860 [2024-07-15 11:34:38.305213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141ee40, cid 0, qid 0 00:23:54.860 [2024-07-15 11:34:38.305287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.860 [2024-07-15 11:34:38.305293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.860 [2024-07-15 11:34:38.305296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141ee40) on tqpair=0x139bec0 00:23:54.860 [2024-07-15 11:34:38.305304] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:54.860 [2024-07-15 11:34:38.305308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:54.860 [2024-07-15 11:34:38.305315] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:54.860 [2024-07-15 11:34:38.305323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:54.860 [2024-07-15 11:34:38.305331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139bec0) 00:23:54.860 [2024-07-15 11:34:38.305340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.860 [2024-07-15 11:34:38.305350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141ee40, cid 0, qid 0 00:23:54.860 [2024-07-15 11:34:38.305478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.860 [2024-07-15 11:34:38.305483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.860 [2024-07-15 11:34:38.305487] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305490] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139bec0): datao=0, datal=4096, cccid=0 00:23:54.860 [2024-07-15 11:34:38.305496] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x141ee40) on tqpair(0x139bec0): expected_datao=0, payload_size=4096 00:23:54.860 [2024-07-15 11:34:38.305500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305507] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305510] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.860 [2024-07-15 11:34:38.305553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.860 [2024-07-15 11:34:38.305556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141ee40) on tqpair=0x139bec0 00:23:54.860 [2024-07-15 11:34:38.305565] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:54.860 [2024-07-15 11:34:38.305572] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:54.860 [2024-07-15 11:34:38.305576] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:54.860 [2024-07-15 11:34:38.305580] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:54.860 [2024-07-15 11:34:38.305584] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:54.860 [2024-07-15 11:34:38.305588] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:54.860 [2024-07-15 11:34:38.305596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:54.860 [2024-07-15 11:34:38.305602] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139bec0) 00:23:54.860 [2024-07-15 11:34:38.305615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:54.860 [2024-07-15 11:34:38.305625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141ee40, cid 0, qid 0 00:23:54.860 [2024-07-15 11:34:38.305698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.860 [2024-07-15 11:34:38.305704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.860 [2024-07-15 11:34:38.305707] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141ee40) on tqpair=0x139bec0 00:23:54.860 [2024-07-15 11:34:38.305716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139bec0) 00:23:54.860 [2024-07-15 11:34:38.305728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.860 [2024-07-15 11:34:38.305733] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x139bec0) 00:23:54.860 [2024-07-15 11:34:38.305744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.860 [2024-07-15 11:34:38.305749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x139bec0) 00:23:54.860 [2024-07-15 11:34:38.305762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.860 [2024-07-15 11:34:38.305767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.860 [2024-07-15 11:34:38.305778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.860 [2024-07-15 11:34:38.305782] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:54.860 [2024-07-15 11:34:38.305792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:54.860 [2024-07-15 11:34:38.305798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139bec0) 00:23:54.860 [2024-07-15 11:34:38.305806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.860 [2024-07-15 11:34:38.305817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141ee40, cid 0, qid 0 00:23:54.860 [2024-07-15 11:34:38.305821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141efc0, cid 1, qid 0 00:23:54.860 [2024-07-15 11:34:38.305825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f140, cid 2, qid 0 00:23:54.860 [2024-07-15 11:34:38.305829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.860 [2024-07-15 11:34:38.305834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f440, cid 4, qid 0 00:23:54.860 [2024-07-15 11:34:38.305937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.860 [2024-07-15 11:34:38.305943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.860 [2024-07-15 11:34:38.305946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.860 [2024-07-15 11:34:38.305949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f440) on tqpair=0x139bec0 00:23:54.861 [2024-07-15 11:34:38.305954] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:54.861 [2024-07-15 11:34:38.305958] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:54.861 [2024-07-15 11:34:38.305967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.305970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139bec0) 00:23:54.861 [2024-07-15 11:34:38.305976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.861 [2024-07-15 11:34:38.305985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f440, cid 4, qid 0 00:23:54.861 [2024-07-15 11:34:38.306065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.861 [2024-07-15 11:34:38.306071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.861 [2024-07-15 11:34:38.306074] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.306077] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139bec0): datao=0, datal=4096, cccid=4 00:23:54.861 [2024-07-15 11:34:38.306081] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x141f440) on tqpair(0x139bec0): expected_datao=0, payload_size=4096 00:23:54.861 [2024-07-15 11:34:38.306084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.306097] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.306100] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.351232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.861 [2024-07-15 11:34:38.351244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.861 [2024-07-15 11:34:38.351248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.351251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f440) on tqpair=0x139bec0 00:23:54.861 [2024-07-15 11:34:38.351265] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:54.861 [2024-07-15 11:34:38.351288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.351292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139bec0) 00:23:54.861 [2024-07-15 11:34:38.351300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.861 [2024-07-15 11:34:38.351306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.351309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.351313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x139bec0) 00:23:54.861 [2024-07-15 11:34:38.351318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.861 [2024-07-15 11:34:38.351334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x141f440, cid 4, qid 0 00:23:54.861 [2024-07-15 11:34:38.351339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f5c0, cid 5, qid 0 00:23:54.861 [2024-07-15 11:34:38.351530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.861 [2024-07-15 11:34:38.351536] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.861 [2024-07-15 11:34:38.351539] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.351543] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139bec0): datao=0, datal=1024, cccid=4 00:23:54.861 [2024-07-15 11:34:38.351547] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x141f440) on tqpair(0x139bec0): expected_datao=0, payload_size=1024 00:23:54.861 [2024-07-15 11:34:38.351550] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.351556] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.351560] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.351565] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.861 [2024-07-15 11:34:38.351569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.861 [2024-07-15 11:34:38.351572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.351576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f5c0) on tqpair=0x139bec0 00:23:54.861 [2024-07-15 11:34:38.393372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.861 [2024-07-15 11:34:38.393383] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.861 [2024-07-15 11:34:38.393386] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.393389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f440) on tqpair=0x139bec0 00:23:54.861 [2024-07-15 11:34:38.393405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.393408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139bec0) 00:23:54.861 [2024-07-15 11:34:38.393415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.861 [2024-07-15 11:34:38.393431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f440, cid 4, qid 0 00:23:54.861 [2024-07-15 11:34:38.393518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.861 [2024-07-15 11:34:38.393524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.861 [2024-07-15 11:34:38.393527] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.393531] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139bec0): datao=0, datal=3072, cccid=4 00:23:54.861 [2024-07-15 11:34:38.393534] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x141f440) on tqpair(0x139bec0): expected_datao=0, payload_size=3072 00:23:54.861 [2024-07-15 11:34:38.393538] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.393544] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.393547] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.393590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.861 [2024-07-15 11:34:38.393595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.861 [2024-07-15 11:34:38.393598] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.393602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f440) on tqpair=0x139bec0 00:23:54.861 [2024-07-15 11:34:38.393609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.393612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139bec0) 00:23:54.861 [2024-07-15 11:34:38.393618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.861 [2024-07-15 11:34:38.393633] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f440, cid 4, qid 0 00:23:54.861 [2024-07-15 11:34:38.393708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.861 [2024-07-15 11:34:38.393713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.861 [2024-07-15 11:34:38.393716] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.393719] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139bec0): datao=0, datal=8, cccid=4 00:23:54.861 [2024-07-15 11:34:38.393723] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x141f440) on tqpair(0x139bec0): expected_datao=0, payload_size=8 00:23:54.861 [2024-07-15 11:34:38.393727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.393732] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.393736] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.435348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.861 [2024-07-15 11:34:38.435362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.861 [2024-07-15 11:34:38.435366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.861 [2024-07-15 11:34:38.435369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f440) on tqpair=0x139bec0 00:23:54.861 ===================================================== 00:23:54.861 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:54.861 ===================================================== 00:23:54.861 Controller Capabilities/Features 00:23:54.861 ================================ 00:23:54.861 Vendor ID: 0000 00:23:54.861 Subsystem Vendor ID: 0000 00:23:54.861 Serial Number: .................... 00:23:54.861 Model Number: ........................................ 
00:23:54.861 Firmware Version: 24.09 00:23:54.861 Recommended Arb Burst: 0 00:23:54.861 IEEE OUI Identifier: 00 00 00 00:23:54.861 Multi-path I/O 00:23:54.861 May have multiple subsystem ports: No 00:23:54.861 May have multiple controllers: No 00:23:54.861 Associated with SR-IOV VF: No 00:23:54.861 Max Data Transfer Size: 131072 00:23:54.861 Max Number of Namespaces: 0 00:23:54.861 Max Number of I/O Queues: 1024 00:23:54.861 NVMe Specification Version (VS): 1.3 00:23:54.861 NVMe Specification Version (Identify): 1.3 00:23:54.861 Maximum Queue Entries: 128 00:23:54.861 Contiguous Queues Required: Yes 00:23:54.861 Arbitration Mechanisms Supported 00:23:54.861 Weighted Round Robin: Not Supported 00:23:54.861 Vendor Specific: Not Supported 00:23:54.861 Reset Timeout: 15000 ms 00:23:54.861 Doorbell Stride: 4 bytes 00:23:54.861 NVM Subsystem Reset: Not Supported 00:23:54.861 Command Sets Supported 00:23:54.861 NVM Command Set: Supported 00:23:54.861 Boot Partition: Not Supported 00:23:54.861 Memory Page Size Minimum: 4096 bytes 00:23:54.861 Memory Page Size Maximum: 4096 bytes 00:23:54.861 Persistent Memory Region: Not Supported 00:23:54.861 Optional Asynchronous Events Supported 00:23:54.861 Namespace Attribute Notices: Not Supported 00:23:54.861 Firmware Activation Notices: Not Supported 00:23:54.861 ANA Change Notices: Not Supported 00:23:54.861 PLE Aggregate Log Change Notices: Not Supported 00:23:54.861 LBA Status Info Alert Notices: Not Supported 00:23:54.861 EGE Aggregate Log Change Notices: Not Supported 00:23:54.861 Normal NVM Subsystem Shutdown event: Not Supported 00:23:54.862 Zone Descriptor Change Notices: Not Supported 00:23:54.862 Discovery Log Change Notices: Supported 00:23:54.862 Controller Attributes 00:23:54.862 128-bit Host Identifier: Not Supported 00:23:54.862 Non-Operational Permissive Mode: Not Supported 00:23:54.862 NVM Sets: Not Supported 00:23:54.862 Read Recovery Levels: Not Supported 00:23:54.862 Endurance Groups: Not Supported 00:23:54.862 Predictable Latency Mode: Not Supported 00:23:54.862 Traffic Based Keep ALive: Not Supported 00:23:54.862 Namespace Granularity: Not Supported 00:23:54.862 SQ Associations: Not Supported 00:23:54.862 UUID List: Not Supported 00:23:54.862 Multi-Domain Subsystem: Not Supported 00:23:54.862 Fixed Capacity Management: Not Supported 00:23:54.862 Variable Capacity Management: Not Supported 00:23:54.862 Delete Endurance Group: Not Supported 00:23:54.862 Delete NVM Set: Not Supported 00:23:54.862 Extended LBA Formats Supported: Not Supported 00:23:54.862 Flexible Data Placement Supported: Not Supported 00:23:54.862 00:23:54.862 Controller Memory Buffer Support 00:23:54.862 ================================ 00:23:54.862 Supported: No 00:23:54.862 00:23:54.862 Persistent Memory Region Support 00:23:54.862 ================================ 00:23:54.862 Supported: No 00:23:54.862 00:23:54.862 Admin Command Set Attributes 00:23:54.862 ============================ 00:23:54.862 Security Send/Receive: Not Supported 00:23:54.862 Format NVM: Not Supported 00:23:54.862 Firmware Activate/Download: Not Supported 00:23:54.862 Namespace Management: Not Supported 00:23:54.862 Device Self-Test: Not Supported 00:23:54.862 Directives: Not Supported 00:23:54.862 NVMe-MI: Not Supported 00:23:54.862 Virtualization Management: Not Supported 00:23:54.862 Doorbell Buffer Config: Not Supported 00:23:54.862 Get LBA Status Capability: Not Supported 00:23:54.862 Command & Feature Lockdown Capability: Not Supported 00:23:54.862 Abort Command Limit: 1 00:23:54.862 Async 
Event Request Limit: 4 00:23:54.862 Number of Firmware Slots: N/A 00:23:54.862 Firmware Slot 1 Read-Only: N/A 00:23:54.862 Firmware Activation Without Reset: N/A 00:23:54.862 Multiple Update Detection Support: N/A 00:23:54.862 Firmware Update Granularity: No Information Provided 00:23:54.862 Per-Namespace SMART Log: No 00:23:54.862 Asymmetric Namespace Access Log Page: Not Supported 00:23:54.862 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:54.862 Command Effects Log Page: Not Supported 00:23:54.862 Get Log Page Extended Data: Supported 00:23:54.862 Telemetry Log Pages: Not Supported 00:23:54.862 Persistent Event Log Pages: Not Supported 00:23:54.862 Supported Log Pages Log Page: May Support 00:23:54.862 Commands Supported & Effects Log Page: Not Supported 00:23:54.862 Feature Identifiers & Effects Log Page:May Support 00:23:54.862 NVMe-MI Commands & Effects Log Page: May Support 00:23:54.862 Data Area 4 for Telemetry Log: Not Supported 00:23:54.862 Error Log Page Entries Supported: 128 00:23:54.862 Keep Alive: Not Supported 00:23:54.862 00:23:54.862 NVM Command Set Attributes 00:23:54.862 ========================== 00:23:54.862 Submission Queue Entry Size 00:23:54.862 Max: 1 00:23:54.862 Min: 1 00:23:54.862 Completion Queue Entry Size 00:23:54.862 Max: 1 00:23:54.862 Min: 1 00:23:54.862 Number of Namespaces: 0 00:23:54.862 Compare Command: Not Supported 00:23:54.862 Write Uncorrectable Command: Not Supported 00:23:54.862 Dataset Management Command: Not Supported 00:23:54.862 Write Zeroes Command: Not Supported 00:23:54.862 Set Features Save Field: Not Supported 00:23:54.862 Reservations: Not Supported 00:23:54.862 Timestamp: Not Supported 00:23:54.862 Copy: Not Supported 00:23:54.862 Volatile Write Cache: Not Present 00:23:54.862 Atomic Write Unit (Normal): 1 00:23:54.862 Atomic Write Unit (PFail): 1 00:23:54.862 Atomic Compare & Write Unit: 1 00:23:54.862 Fused Compare & Write: Supported 00:23:54.862 Scatter-Gather List 00:23:54.862 SGL Command Set: Supported 00:23:54.862 SGL Keyed: Supported 00:23:54.862 SGL Bit Bucket Descriptor: Not Supported 00:23:54.862 SGL Metadata Pointer: Not Supported 00:23:54.862 Oversized SGL: Not Supported 00:23:54.862 SGL Metadata Address: Not Supported 00:23:54.862 SGL Offset: Supported 00:23:54.862 Transport SGL Data Block: Not Supported 00:23:54.862 Replay Protected Memory Block: Not Supported 00:23:54.862 00:23:54.862 Firmware Slot Information 00:23:54.862 ========================= 00:23:54.862 Active slot: 0 00:23:54.862 00:23:54.862 00:23:54.862 Error Log 00:23:54.862 ========= 00:23:54.862 00:23:54.862 Active Namespaces 00:23:54.862 ================= 00:23:54.862 Discovery Log Page 00:23:54.862 ================== 00:23:54.862 Generation Counter: 2 00:23:54.862 Number of Records: 2 00:23:54.862 Record Format: 0 00:23:54.862 00:23:54.862 Discovery Log Entry 0 00:23:54.862 ---------------------- 00:23:54.862 Transport Type: 3 (TCP) 00:23:54.862 Address Family: 1 (IPv4) 00:23:54.862 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:54.862 Entry Flags: 00:23:54.862 Duplicate Returned Information: 1 00:23:54.862 Explicit Persistent Connection Support for Discovery: 1 00:23:54.862 Transport Requirements: 00:23:54.862 Secure Channel: Not Required 00:23:54.862 Port ID: 0 (0x0000) 00:23:54.862 Controller ID: 65535 (0xffff) 00:23:54.862 Admin Max SQ Size: 128 00:23:54.862 Transport Service Identifier: 4420 00:23:54.862 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:54.862 Transport Address: 10.0.0.2 00:23:54.862 
Discovery Log Entry 1 00:23:54.862 ---------------------- 00:23:54.862 Transport Type: 3 (TCP) 00:23:54.862 Address Family: 1 (IPv4) 00:23:54.862 Subsystem Type: 2 (NVM Subsystem) 00:23:54.862 Entry Flags: 00:23:54.862 Duplicate Returned Information: 0 00:23:54.862 Explicit Persistent Connection Support for Discovery: 0 00:23:54.862 Transport Requirements: 00:23:54.862 Secure Channel: Not Required 00:23:54.862 Port ID: 0 (0x0000) 00:23:54.862 Controller ID: 65535 (0xffff) 00:23:54.862 Admin Max SQ Size: 128 00:23:54.862 Transport Service Identifier: 4420 00:23:54.862 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:54.862 Transport Address: 10.0.0.2 [2024-07-15 11:34:38.435448] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:54.862 [2024-07-15 11:34:38.435459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141ee40) on tqpair=0x139bec0 00:23:54.863 [2024-07-15 11:34:38.435465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.863 [2024-07-15 11:34:38.435470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141efc0) on tqpair=0x139bec0 00:23:54.863 [2024-07-15 11:34:38.435474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.863 [2024-07-15 11:34:38.435478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f140) on tqpair=0x139bec0 00:23:54.863 [2024-07-15 11:34:38.435482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.863 [2024-07-15 11:34:38.435486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.863 [2024-07-15 11:34:38.435492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.863 [2024-07-15 11:34:38.435502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.863 [2024-07-15 11:34:38.435515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.863 [2024-07-15 11:34:38.435528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.863 [2024-07-15 11:34:38.435594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.863 [2024-07-15 11:34:38.435600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.863 [2024-07-15 11:34:38.435603] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.863 [2024-07-15 11:34:38.435613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.863 [2024-07-15 
11:34:38.435625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.863 [2024-07-15 11:34:38.435637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.863 [2024-07-15 11:34:38.435713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.863 [2024-07-15 11:34:38.435719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.863 [2024-07-15 11:34:38.435722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.863 [2024-07-15 11:34:38.435729] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:54.863 [2024-07-15 11:34:38.435733] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:54.863 [2024-07-15 11:34:38.435741] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.863 [2024-07-15 11:34:38.435753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.863 [2024-07-15 11:34:38.435762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.863 [2024-07-15 11:34:38.435834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.863 [2024-07-15 11:34:38.435840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.863 [2024-07-15 11:34:38.435843] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435846] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.863 [2024-07-15 11:34:38.435855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.863 [2024-07-15 11:34:38.435867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.863 [2024-07-15 11:34:38.435877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.863 [2024-07-15 11:34:38.435945] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.863 [2024-07-15 11:34:38.435951] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.863 [2024-07-15 11:34:38.435954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.863 [2024-07-15 11:34:38.435966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.435972] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.863 [2024-07-15 11:34:38.435978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.863 [2024-07-15 11:34:38.435987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.863 [2024-07-15 11:34:38.436058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.863 [2024-07-15 11:34:38.436063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.863 [2024-07-15 11:34:38.436066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.863 [2024-07-15 11:34:38.436078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.863 [2024-07-15 11:34:38.436090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.863 [2024-07-15 11:34:38.436099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.863 [2024-07-15 11:34:38.436167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.863 [2024-07-15 11:34:38.436172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.863 [2024-07-15 11:34:38.436175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.863 [2024-07-15 11:34:38.436186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.863 [2024-07-15 11:34:38.436199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.863 [2024-07-15 11:34:38.436208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.863 [2024-07-15 11:34:38.436278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.863 [2024-07-15 11:34:38.436284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.863 [2024-07-15 11:34:38.436287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.863 [2024-07-15 11:34:38.436298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.863 [2024-07-15 11:34:38.436310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.863 [2024-07-15 11:34:38.436319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.863 [2024-07-15 11:34:38.436388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.863 [2024-07-15 11:34:38.436396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.863 [2024-07-15 11:34:38.436399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.863 [2024-07-15 11:34:38.436410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.863 [2024-07-15 11:34:38.436422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.863 [2024-07-15 11:34:38.436431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.863 [2024-07-15 11:34:38.436499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.863 [2024-07-15 11:34:38.436504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.863 [2024-07-15 11:34:38.436507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.863 [2024-07-15 11:34:38.436511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.436519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.436522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.436525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.436531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.864 [2024-07-15 11:34:38.436539] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.864 [2024-07-15 11:34:38.436612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.864 [2024-07-15 11:34:38.436617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.864 [2024-07-15 11:34:38.436620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.436624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.436631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.436635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.436638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.436644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.864 [2024-07-15 11:34:38.436653] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.864 
[2024-07-15 11:34:38.436757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.864 [2024-07-15 11:34:38.436762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.864 [2024-07-15 11:34:38.436765] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.436768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.436777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.436781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.436784] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.436789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.864 [2024-07-15 11:34:38.436799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.864 [2024-07-15 11:34:38.436868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.864 [2024-07-15 11:34:38.436873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.864 [2024-07-15 11:34:38.436878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.436881] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.436889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.436893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.436896] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.436901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.864 [2024-07-15 11:34:38.436910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.864 [2024-07-15 11:34:38.436979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.864 [2024-07-15 11:34:38.436984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.864 [2024-07-15 11:34:38.436987] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.436991] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.436999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437005] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.437011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.864 [2024-07-15 11:34:38.437020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.864 [2024-07-15 11:34:38.437089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.864 [2024-07-15 11:34:38.437094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:54.864 [2024-07-15 11:34:38.437097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.437108] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437112] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437115] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.437121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.864 [2024-07-15 11:34:38.437129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.864 [2024-07-15 11:34:38.437198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.864 [2024-07-15 11:34:38.437204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.864 [2024-07-15 11:34:38.437207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.437218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.437234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.864 [2024-07-15 11:34:38.437243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.864 [2024-07-15 11:34:38.437306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.864 [2024-07-15 11:34:38.437312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.864 [2024-07-15 11:34:38.437315] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.437330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.437341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.864 [2024-07-15 11:34:38.437351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.864 [2024-07-15 11:34:38.437417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.864 [2024-07-15 11:34:38.437423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.864 [2024-07-15 11:34:38.437426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.437437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.437449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.864 [2024-07-15 11:34:38.437458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.864 [2024-07-15 11:34:38.437524] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.864 [2024-07-15 11:34:38.437530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.864 [2024-07-15 11:34:38.437533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.437544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437548] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.437557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.864 [2024-07-15 11:34:38.437566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.864 [2024-07-15 11:34:38.437634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.864 [2024-07-15 11:34:38.437640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.864 [2024-07-15 11:34:38.437643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.437654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.437666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.864 [2024-07-15 11:34:38.437675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.864 [2024-07-15 11:34:38.437746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.864 [2024-07-15 11:34:38.437752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.864 [2024-07-15 11:34:38.437755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437758] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.437768] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437772] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.437780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.864 [2024-07-15 11:34:38.437789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.864 [2024-07-15 11:34:38.437860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.864 [2024-07-15 11:34:38.437866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.864 [2024-07-15 11:34:38.437869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.864 [2024-07-15 11:34:38.437880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.864 [2024-07-15 11:34:38.437887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.864 [2024-07-15 11:34:38.437892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.865 [2024-07-15 11:34:38.437901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.865 [2024-07-15 11:34:38.437966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.865 [2024-07-15 11:34:38.437972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.865 [2024-07-15 11:34:38.437975] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.865 [2024-07-15 11:34:38.437978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.865 [2024-07-15 11:34:38.437986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.865 [2024-07-15 11:34:38.437989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.865 [2024-07-15 11:34:38.437993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.865 [2024-07-15 11:34:38.437998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.865 [2024-07-15 11:34:38.438007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.865 [2024-07-15 11:34:38.438078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.865 [2024-07-15 11:34:38.438084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.865 [2024-07-15 11:34:38.438087] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.865 [2024-07-15 11:34:38.438090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.865 [2024-07-15 11:34:38.438098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.865 [2024-07-15 11:34:38.438101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.865 [2024-07-15 11:34:38.438104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.865 
[2024-07-15 11:34:38.438110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.865 [2024-07-15 11:34:38.438119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.865 [2024-07-15 11:34:38.438187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.865 [2024-07-15 11:34:38.438193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.865 [2024-07-15 11:34:38.438196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.865 [2024-07-15 11:34:38.438199] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.865 [2024-07-15 11:34:38.438207] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.865 [2024-07-15 11:34:38.438213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.865 [2024-07-15 11:34:38.438216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139bec0) 00:23:54.865 [2024-07-15 11:34:38.438221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.865 [2024-07-15 11:34:38.442239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x141f2c0, cid 3, qid 0 00:23:54.865 [2024-07-15 11:34:38.442399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.865 [2024-07-15 11:34:38.442405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.865 [2024-07-15 11:34:38.442408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.865 [2024-07-15 11:34:38.442411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x141f2c0) on tqpair=0x139bec0 00:23:54.865 [2024-07-15 11:34:38.442418] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:55.133 00:23:55.133 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:55.133 [2024-07-15 11:34:38.478043] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:23:55.133 [2024-07-15 11:34:38.478076] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686532 ] 00:23:55.133 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.133 [2024-07-15 11:34:38.506458] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:55.133 [2024-07-15 11:34:38.506499] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:55.133 [2024-07-15 11:34:38.506504] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:55.133 [2024-07-15 11:34:38.506513] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:55.133 [2024-07-15 11:34:38.506519] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:55.133 [2024-07-15 11:34:38.506835] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:55.133 [2024-07-15 11:34:38.506857] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8caec0 0 00:23:55.133 [2024-07-15 11:34:38.513236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:55.133 [2024-07-15 11:34:38.513248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:55.133 [2024-07-15 11:34:38.513252] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:55.133 [2024-07-15 11:34:38.513255] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:55.133 [2024-07-15 11:34:38.513282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.133 [2024-07-15 11:34:38.513286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.133 [2024-07-15 11:34:38.513289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8caec0) 00:23:55.133 [2024-07-15 11:34:38.513299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:55.133 [2024-07-15 11:34:38.513313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94de40, cid 0, qid 0 00:23:55.133 [2024-07-15 11:34:38.521236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.133 [2024-07-15 11:34:38.521245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.133 [2024-07-15 11:34:38.521251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.133 [2024-07-15 11:34:38.521254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94de40) on tqpair=0x8caec0 00:23:55.133 [2024-07-15 11:34:38.521271] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:55.133 [2024-07-15 11:34:38.521277] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:55.133 [2024-07-15 11:34:38.521282] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:55.133 [2024-07-15 11:34:38.521292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.133 [2024-07-15 11:34:38.521296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.133 
[2024-07-15 11:34:38.521299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8caec0) 00:23:55.133 [2024-07-15 11:34:38.521306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-15 11:34:38.521318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94de40, cid 0, qid 0 00:23:55.133 [2024-07-15 11:34:38.521417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.133 [2024-07-15 11:34:38.521423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.133 [2024-07-15 11:34:38.521427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.133 [2024-07-15 11:34:38.521430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94de40) on tqpair=0x8caec0 00:23:55.133 [2024-07-15 11:34:38.521434] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:55.133 [2024-07-15 11:34:38.521440] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:55.133 [2024-07-15 11:34:38.521448] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.133 [2024-07-15 11:34:38.521452] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.133 [2024-07-15 11:34:38.521455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8caec0) 00:23:55.133 [2024-07-15 11:34:38.521461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.133 [2024-07-15 11:34:38.521471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94de40, cid 0, qid 0 00:23:55.133 [2024-07-15 11:34:38.521539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.133 [2024-07-15 11:34:38.521546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.133 [2024-07-15 11:34:38.521550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.133 [2024-07-15 11:34:38.521553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94de40) on tqpair=0x8caec0 00:23:55.133 [2024-07-15 11:34:38.521557] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:55.133 [2024-07-15 11:34:38.521564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:55.133 [2024-07-15 11:34:38.521570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.521575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.521578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8caec0) 00:23:55.134 [2024-07-15 11:34:38.521584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.134 [2024-07-15 11:34:38.521594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94de40, cid 0, qid 0 00:23:55.134 [2024-07-15 11:34:38.521661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.134 [2024-07-15 11:34:38.521668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.134 
[2024-07-15 11:34:38.521672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.521678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94de40) on tqpair=0x8caec0 00:23:55.134 [2024-07-15 11:34:38.521682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:55.134 [2024-07-15 11:34:38.521689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.521693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.521697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8caec0) 00:23:55.134 [2024-07-15 11:34:38.521703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.134 [2024-07-15 11:34:38.521713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94de40, cid 0, qid 0 00:23:55.134 [2024-07-15 11:34:38.521777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.134 [2024-07-15 11:34:38.521783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.134 [2024-07-15 11:34:38.521786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.521790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94de40) on tqpair=0x8caec0 00:23:55.134 [2024-07-15 11:34:38.521793] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:55.134 [2024-07-15 11:34:38.521797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:55.134 [2024-07-15 11:34:38.521805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:55.134 [2024-07-15 11:34:38.521911] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:55.134 [2024-07-15 11:34:38.521914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:55.134 [2024-07-15 11:34:38.521921] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.521924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.521927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8caec0) 00:23:55.134 [2024-07-15 11:34:38.521933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.134 [2024-07-15 11:34:38.521942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94de40, cid 0, qid 0 00:23:55.134 [2024-07-15 11:34:38.522009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.134 [2024-07-15 11:34:38.522014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.134 [2024-07-15 11:34:38.522017] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.522020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94de40) on tqpair=0x8caec0 00:23:55.134 [2024-07-15 
11:34:38.522024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:55.134 [2024-07-15 11:34:38.522031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.522035] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.522038] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8caec0) 00:23:55.134 [2024-07-15 11:34:38.522044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.134 [2024-07-15 11:34:38.522053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94de40, cid 0, qid 0 00:23:55.134 [2024-07-15 11:34:38.522139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.134 [2024-07-15 11:34:38.522144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.134 [2024-07-15 11:34:38.522147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.522152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94de40) on tqpair=0x8caec0 00:23:55.134 [2024-07-15 11:34:38.522156] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:55.134 [2024-07-15 11:34:38.522160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:55.134 [2024-07-15 11:34:38.522167] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:55.134 [2024-07-15 11:34:38.522174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:55.134 [2024-07-15 11:34:38.522182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.522185] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8caec0) 00:23:55.134 [2024-07-15 11:34:38.522190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.134 [2024-07-15 11:34:38.522200] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94de40, cid 0, qid 0 00:23:55.134 [2024-07-15 11:34:38.522316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.134 [2024-07-15 11:34:38.522323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.134 [2024-07-15 11:34:38.522325] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.522329] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8caec0): datao=0, datal=4096, cccid=0 00:23:55.134 [2024-07-15 11:34:38.522332] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x94de40) on tqpair(0x8caec0): expected_datao=0, payload_size=4096 00:23:55.134 [2024-07-15 11:34:38.522336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.522355] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.522359] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.134 
[2024-07-15 11:34:38.568233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.134 [2024-07-15 11:34:38.568242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.134 [2024-07-15 11:34:38.568246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.568249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94de40) on tqpair=0x8caec0 00:23:55.134 [2024-07-15 11:34:38.568256] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:55.134 [2024-07-15 11:34:38.568263] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:55.134 [2024-07-15 11:34:38.568267] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:55.134 [2024-07-15 11:34:38.568270] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:55.134 [2024-07-15 11:34:38.568274] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:55.134 [2024-07-15 11:34:38.568278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:55.134 [2024-07-15 11:34:38.568286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:55.134 [2024-07-15 11:34:38.568293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.568296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.568299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8caec0) 00:23:55.134 [2024-07-15 11:34:38.568307] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.134 [2024-07-15 11:34:38.568320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94de40, cid 0, qid 0 00:23:55.134 [2024-07-15 11:34:38.568457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.134 [2024-07-15 11:34:38.568463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.134 [2024-07-15 11:34:38.568466] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.568469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94de40) on tqpair=0x8caec0 00:23:55.134 [2024-07-15 11:34:38.568475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.568479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.568482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8caec0) 00:23:55.134 [2024-07-15 11:34:38.568487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.134 [2024-07-15 11:34:38.568492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.568496] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.568499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8caec0) 
00:23:55.134 [2024-07-15 11:34:38.568503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.134 [2024-07-15 11:34:38.568508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.568511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.568514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8caec0) 00:23:55.134 [2024-07-15 11:34:38.568519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.134 [2024-07-15 11:34:38.568524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.568527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.134 [2024-07-15 11:34:38.568530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.134 [2024-07-15 11:34:38.568535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.134 [2024-07-15 11:34:38.568539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:55.134 [2024-07-15 11:34:38.568549] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:55.134 [2024-07-15 11:34:38.568555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.568558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8caec0) 00:23:55.135 [2024-07-15 11:34:38.568564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.135 [2024-07-15 11:34:38.568575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94de40, cid 0, qid 0 00:23:55.135 [2024-07-15 11:34:38.568580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94dfc0, cid 1, qid 0 00:23:55.135 [2024-07-15 11:34:38.568583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e140, cid 2, qid 0 00:23:55.135 [2024-07-15 11:34:38.568588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.135 [2024-07-15 11:34:38.568591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e440, cid 4, qid 0 00:23:55.135 [2024-07-15 11:34:38.568698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.135 [2024-07-15 11:34:38.568703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.135 [2024-07-15 11:34:38.568706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.568709] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e440) on tqpair=0x8caec0 00:23:55.135 [2024-07-15 11:34:38.568715] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:55.135 [2024-07-15 11:34:38.568720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.568726] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.568732] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.568738] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.568741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.568744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8caec0) 00:23:55.135 [2024-07-15 11:34:38.568750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.135 [2024-07-15 11:34:38.568759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e440, cid 4, qid 0 00:23:55.135 [2024-07-15 11:34:38.568837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.135 [2024-07-15 11:34:38.568842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.135 [2024-07-15 11:34:38.568845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.568848] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e440) on tqpair=0x8caec0 00:23:55.135 [2024-07-15 11:34:38.568899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.568908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.568914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.568918] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8caec0) 00:23:55.135 [2024-07-15 11:34:38.568924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.135 [2024-07-15 11:34:38.568933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e440, cid 4, qid 0 00:23:55.135 [2024-07-15 11:34:38.569016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.135 [2024-07-15 11:34:38.569022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.135 [2024-07-15 11:34:38.569025] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.569028] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8caec0): datao=0, datal=4096, cccid=4 00:23:55.135 [2024-07-15 11:34:38.569032] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x94e440) on tqpair(0x8caec0): expected_datao=0, payload_size=4096 00:23:55.135 [2024-07-15 11:34:38.569036] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.569041] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.569045] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.569091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.135 [2024-07-15 11:34:38.569096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:23:55.135 [2024-07-15 11:34:38.569099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.569102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e440) on tqpair=0x8caec0 00:23:55.135 [2024-07-15 11:34:38.569110] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:55.135 [2024-07-15 11:34:38.569121] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.569131] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.569138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.569141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8caec0) 00:23:55.135 [2024-07-15 11:34:38.569146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.135 [2024-07-15 11:34:38.569156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e440, cid 4, qid 0 00:23:55.135 [2024-07-15 11:34:38.569274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.135 [2024-07-15 11:34:38.569279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.135 [2024-07-15 11:34:38.569282] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.569285] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8caec0): datao=0, datal=4096, cccid=4 00:23:55.135 [2024-07-15 11:34:38.569289] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x94e440) on tqpair(0x8caec0): expected_datao=0, payload_size=4096 00:23:55.135 [2024-07-15 11:34:38.569292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.569302] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.569306] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.614230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.135 [2024-07-15 11:34:38.614239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.135 [2024-07-15 11:34:38.614242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.614245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e440) on tqpair=0x8caec0 00:23:55.135 [2024-07-15 11:34:38.614257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.614267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.614274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.614277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8caec0) 00:23:55.135 [2024-07-15 11:34:38.614283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.135 [2024-07-15 11:34:38.614295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e440, cid 4, qid 0 00:23:55.135 [2024-07-15 11:34:38.614444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.135 [2024-07-15 11:34:38.614450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.135 [2024-07-15 11:34:38.614453] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.614456] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8caec0): datao=0, datal=4096, cccid=4 00:23:55.135 [2024-07-15 11:34:38.614460] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x94e440) on tqpair(0x8caec0): expected_datao=0, payload_size=4096 00:23:55.135 [2024-07-15 11:34:38.614463] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.614469] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.614472] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.614520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.135 [2024-07-15 11:34:38.614526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.135 [2024-07-15 11:34:38.614529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.614536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e440) on tqpair=0x8caec0 00:23:55.135 [2024-07-15 11:34:38.614543] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.614550] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.614557] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.614562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.614566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.614571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.614575] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:55.135 [2024-07-15 11:34:38.614579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:55.135 [2024-07-15 11:34:38.614583] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:55.135 [2024-07-15 11:34:38.614595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.614599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8caec0) 00:23:55.135 [2024-07-15 11:34:38.614605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.135 [2024-07-15 11:34:38.614611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.614614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.614617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8caec0) 00:23:55.135 [2024-07-15 11:34:38.614622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.135 [2024-07-15 11:34:38.614634] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e440, cid 4, qid 0 00:23:55.135 [2024-07-15 11:34:38.614639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e5c0, cid 5, qid 0 00:23:55.135 [2024-07-15 11:34:38.614722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.135 [2024-07-15 11:34:38.614728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.135 [2024-07-15 11:34:38.614731] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.135 [2024-07-15 11:34:38.614734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e440) on tqpair=0x8caec0 00:23:55.135 [2024-07-15 11:34:38.614740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.135 [2024-07-15 11:34:38.614744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.135 [2024-07-15 11:34:38.614747] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.614751] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e5c0) on tqpair=0x8caec0 00:23:55.136 [2024-07-15 11:34:38.614758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.614762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8caec0) 00:23:55.136 [2024-07-15 11:34:38.614767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.136 [2024-07-15 11:34:38.614776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e5c0, cid 5, qid 0 00:23:55.136 [2024-07-15 11:34:38.614849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.136 [2024-07-15 11:34:38.614857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.136 [2024-07-15 11:34:38.614860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.614863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e5c0) on tqpair=0x8caec0 00:23:55.136 [2024-07-15 11:34:38.614870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.614874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8caec0) 00:23:55.136 [2024-07-15 11:34:38.614879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.136 [2024-07-15 11:34:38.614888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e5c0, cid 5, qid 0 00:23:55.136 [2024-07-15 11:34:38.614958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.136 [2024-07-15 11:34:38.614964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:55.136 [2024-07-15 11:34:38.614967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.614970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e5c0) on tqpair=0x8caec0 00:23:55.136 [2024-07-15 11:34:38.614977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.614980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8caec0) 00:23:55.136 [2024-07-15 11:34:38.614986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.136 [2024-07-15 11:34:38.614994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e5c0, cid 5, qid 0 00:23:55.136 [2024-07-15 11:34:38.615098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.136 [2024-07-15 11:34:38.615103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.136 [2024-07-15 11:34:38.615106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e5c0) on tqpair=0x8caec0 00:23:55.136 [2024-07-15 11:34:38.615122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8caec0) 00:23:55.136 [2024-07-15 11:34:38.615131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.136 [2024-07-15 11:34:38.615137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8caec0) 00:23:55.136 [2024-07-15 11:34:38.615145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.136 [2024-07-15 11:34:38.615151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8caec0) 00:23:55.136 [2024-07-15 11:34:38.615160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.136 [2024-07-15 11:34:38.615166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8caec0) 00:23:55.136 [2024-07-15 11:34:38.615174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.136 [2024-07-15 11:34:38.615184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e5c0, cid 5, qid 0 00:23:55.136 [2024-07-15 11:34:38.615189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e440, cid 4, qid 0 00:23:55.136 [2024-07-15 11:34:38.615194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e740, cid 6, qid 0 00:23:55.136 [2024-07-15 
11:34:38.615198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e8c0, cid 7, qid 0 00:23:55.136 [2024-07-15 11:34:38.615367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.136 [2024-07-15 11:34:38.615373] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.136 [2024-07-15 11:34:38.615376] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615379] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8caec0): datao=0, datal=8192, cccid=5 00:23:55.136 [2024-07-15 11:34:38.615383] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x94e5c0) on tqpair(0x8caec0): expected_datao=0, payload_size=8192 00:23:55.136 [2024-07-15 11:34:38.615387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615393] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615396] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.136 [2024-07-15 11:34:38.615405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.136 [2024-07-15 11:34:38.615408] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615411] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8caec0): datao=0, datal=512, cccid=4 00:23:55.136 [2024-07-15 11:34:38.615415] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x94e440) on tqpair(0x8caec0): expected_datao=0, payload_size=512 00:23:55.136 [2024-07-15 11:34:38.615419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615424] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615427] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.136 [2024-07-15 11:34:38.615436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.136 [2024-07-15 11:34:38.615439] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615442] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8caec0): datao=0, datal=512, cccid=6 00:23:55.136 [2024-07-15 11:34:38.615446] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x94e740) on tqpair(0x8caec0): expected_datao=0, payload_size=512 00:23:55.136 [2024-07-15 11:34:38.615449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615454] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615457] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.136 [2024-07-15 11:34:38.615467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.136 [2024-07-15 11:34:38.615470] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615472] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8caec0): datao=0, datal=4096, cccid=7 00:23:55.136 [2024-07-15 11:34:38.615476] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x94e8c0) on tqpair(0x8caec0): expected_datao=0, payload_size=4096 00:23:55.136 [2024-07-15 11:34:38.615480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615485] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615488] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.136 [2024-07-15 11:34:38.615500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.136 [2024-07-15 11:34:38.615503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e5c0) on tqpair=0x8caec0 00:23:55.136 [2024-07-15 11:34:38.615518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.136 [2024-07-15 11:34:38.615523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.136 [2024-07-15 11:34:38.615526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e440) on tqpair=0x8caec0 00:23:55.136 [2024-07-15 11:34:38.615537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.136 [2024-07-15 11:34:38.615542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.136 [2024-07-15 11:34:38.615545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e740) on tqpair=0x8caec0 00:23:55.136 [2024-07-15 11:34:38.615554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.136 [2024-07-15 11:34:38.615559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.136 [2024-07-15 11:34:38.615562] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.136 [2024-07-15 11:34:38.615565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e8c0) on tqpair=0x8caec0 00:23:55.136 ===================================================== 00:23:55.136 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:55.136 ===================================================== 00:23:55.136 Controller Capabilities/Features 00:23:55.136 ================================ 00:23:55.136 Vendor ID: 8086 00:23:55.136 Subsystem Vendor ID: 8086 00:23:55.136 Serial Number: SPDK00000000000001 00:23:55.136 Model Number: SPDK bdev Controller 00:23:55.136 Firmware Version: 24.09 00:23:55.136 Recommended Arb Burst: 6 00:23:55.136 IEEE OUI Identifier: e4 d2 5c 00:23:55.136 Multi-path I/O 00:23:55.136 May have multiple subsystem ports: Yes 00:23:55.136 May have multiple controllers: Yes 00:23:55.136 Associated with SR-IOV VF: No 00:23:55.136 Max Data Transfer Size: 131072 00:23:55.136 Max Number of Namespaces: 32 00:23:55.136 Max Number of I/O Queues: 127 00:23:55.136 NVMe Specification Version (VS): 1.3 00:23:55.136 NVMe Specification Version (Identify): 1.3 00:23:55.136 Maximum Queue Entries: 128 00:23:55.136 Contiguous Queues Required: Yes 00:23:55.136 Arbitration Mechanisms Supported 00:23:55.136 Weighted Round Robin: Not Supported 00:23:55.136 Vendor Specific: Not Supported 00:23:55.136 Reset Timeout: 15000 ms 00:23:55.136 
Doorbell Stride: 4 bytes 00:23:55.136 NVM Subsystem Reset: Not Supported 00:23:55.136 Command Sets Supported 00:23:55.136 NVM Command Set: Supported 00:23:55.136 Boot Partition: Not Supported 00:23:55.136 Memory Page Size Minimum: 4096 bytes 00:23:55.136 Memory Page Size Maximum: 4096 bytes 00:23:55.136 Persistent Memory Region: Not Supported 00:23:55.136 Optional Asynchronous Events Supported 00:23:55.136 Namespace Attribute Notices: Supported 00:23:55.136 Firmware Activation Notices: Not Supported 00:23:55.136 ANA Change Notices: Not Supported 00:23:55.136 PLE Aggregate Log Change Notices: Not Supported 00:23:55.136 LBA Status Info Alert Notices: Not Supported 00:23:55.137 EGE Aggregate Log Change Notices: Not Supported 00:23:55.137 Normal NVM Subsystem Shutdown event: Not Supported 00:23:55.137 Zone Descriptor Change Notices: Not Supported 00:23:55.137 Discovery Log Change Notices: Not Supported 00:23:55.137 Controller Attributes 00:23:55.137 128-bit Host Identifier: Supported 00:23:55.137 Non-Operational Permissive Mode: Not Supported 00:23:55.137 NVM Sets: Not Supported 00:23:55.137 Read Recovery Levels: Not Supported 00:23:55.137 Endurance Groups: Not Supported 00:23:55.137 Predictable Latency Mode: Not Supported 00:23:55.137 Traffic Based Keep ALive: Not Supported 00:23:55.137 Namespace Granularity: Not Supported 00:23:55.137 SQ Associations: Not Supported 00:23:55.137 UUID List: Not Supported 00:23:55.137 Multi-Domain Subsystem: Not Supported 00:23:55.137 Fixed Capacity Management: Not Supported 00:23:55.137 Variable Capacity Management: Not Supported 00:23:55.137 Delete Endurance Group: Not Supported 00:23:55.137 Delete NVM Set: Not Supported 00:23:55.137 Extended LBA Formats Supported: Not Supported 00:23:55.137 Flexible Data Placement Supported: Not Supported 00:23:55.137 00:23:55.137 Controller Memory Buffer Support 00:23:55.137 ================================ 00:23:55.137 Supported: No 00:23:55.137 00:23:55.137 Persistent Memory Region Support 00:23:55.137 ================================ 00:23:55.137 Supported: No 00:23:55.137 00:23:55.137 Admin Command Set Attributes 00:23:55.137 ============================ 00:23:55.137 Security Send/Receive: Not Supported 00:23:55.137 Format NVM: Not Supported 00:23:55.137 Firmware Activate/Download: Not Supported 00:23:55.137 Namespace Management: Not Supported 00:23:55.137 Device Self-Test: Not Supported 00:23:55.137 Directives: Not Supported 00:23:55.137 NVMe-MI: Not Supported 00:23:55.137 Virtualization Management: Not Supported 00:23:55.137 Doorbell Buffer Config: Not Supported 00:23:55.137 Get LBA Status Capability: Not Supported 00:23:55.137 Command & Feature Lockdown Capability: Not Supported 00:23:55.137 Abort Command Limit: 4 00:23:55.137 Async Event Request Limit: 4 00:23:55.137 Number of Firmware Slots: N/A 00:23:55.137 Firmware Slot 1 Read-Only: N/A 00:23:55.137 Firmware Activation Without Reset: N/A 00:23:55.137 Multiple Update Detection Support: N/A 00:23:55.137 Firmware Update Granularity: No Information Provided 00:23:55.137 Per-Namespace SMART Log: No 00:23:55.137 Asymmetric Namespace Access Log Page: Not Supported 00:23:55.137 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:55.137 Command Effects Log Page: Supported 00:23:55.137 Get Log Page Extended Data: Supported 00:23:55.137 Telemetry Log Pages: Not Supported 00:23:55.137 Persistent Event Log Pages: Not Supported 00:23:55.137 Supported Log Pages Log Page: May Support 00:23:55.137 Commands Supported & Effects Log Page: Not Supported 00:23:55.137 Feature Identifiers & 
Effects Log Page:May Support 00:23:55.137 NVMe-MI Commands & Effects Log Page: May Support 00:23:55.137 Data Area 4 for Telemetry Log: Not Supported 00:23:55.137 Error Log Page Entries Supported: 128 00:23:55.137 Keep Alive: Supported 00:23:55.137 Keep Alive Granularity: 10000 ms 00:23:55.137 00:23:55.137 NVM Command Set Attributes 00:23:55.137 ========================== 00:23:55.137 Submission Queue Entry Size 00:23:55.137 Max: 64 00:23:55.137 Min: 64 00:23:55.137 Completion Queue Entry Size 00:23:55.137 Max: 16 00:23:55.137 Min: 16 00:23:55.137 Number of Namespaces: 32 00:23:55.137 Compare Command: Supported 00:23:55.137 Write Uncorrectable Command: Not Supported 00:23:55.137 Dataset Management Command: Supported 00:23:55.137 Write Zeroes Command: Supported 00:23:55.137 Set Features Save Field: Not Supported 00:23:55.137 Reservations: Supported 00:23:55.137 Timestamp: Not Supported 00:23:55.137 Copy: Supported 00:23:55.137 Volatile Write Cache: Present 00:23:55.137 Atomic Write Unit (Normal): 1 00:23:55.137 Atomic Write Unit (PFail): 1 00:23:55.137 Atomic Compare & Write Unit: 1 00:23:55.137 Fused Compare & Write: Supported 00:23:55.137 Scatter-Gather List 00:23:55.137 SGL Command Set: Supported 00:23:55.137 SGL Keyed: Supported 00:23:55.137 SGL Bit Bucket Descriptor: Not Supported 00:23:55.137 SGL Metadata Pointer: Not Supported 00:23:55.137 Oversized SGL: Not Supported 00:23:55.137 SGL Metadata Address: Not Supported 00:23:55.137 SGL Offset: Supported 00:23:55.137 Transport SGL Data Block: Not Supported 00:23:55.137 Replay Protected Memory Block: Not Supported 00:23:55.137 00:23:55.137 Firmware Slot Information 00:23:55.137 ========================= 00:23:55.137 Active slot: 1 00:23:55.137 Slot 1 Firmware Revision: 24.09 00:23:55.137 00:23:55.137 00:23:55.137 Commands Supported and Effects 00:23:55.137 ============================== 00:23:55.137 Admin Commands 00:23:55.137 -------------- 00:23:55.137 Get Log Page (02h): Supported 00:23:55.137 Identify (06h): Supported 00:23:55.137 Abort (08h): Supported 00:23:55.137 Set Features (09h): Supported 00:23:55.137 Get Features (0Ah): Supported 00:23:55.137 Asynchronous Event Request (0Ch): Supported 00:23:55.137 Keep Alive (18h): Supported 00:23:55.137 I/O Commands 00:23:55.137 ------------ 00:23:55.137 Flush (00h): Supported LBA-Change 00:23:55.137 Write (01h): Supported LBA-Change 00:23:55.137 Read (02h): Supported 00:23:55.137 Compare (05h): Supported 00:23:55.137 Write Zeroes (08h): Supported LBA-Change 00:23:55.137 Dataset Management (09h): Supported LBA-Change 00:23:55.137 Copy (19h): Supported LBA-Change 00:23:55.137 00:23:55.137 Error Log 00:23:55.137 ========= 00:23:55.137 00:23:55.137 Arbitration 00:23:55.137 =========== 00:23:55.137 Arbitration Burst: 1 00:23:55.137 00:23:55.137 Power Management 00:23:55.137 ================ 00:23:55.137 Number of Power States: 1 00:23:55.137 Current Power State: Power State #0 00:23:55.137 Power State #0: 00:23:55.137 Max Power: 0.00 W 00:23:55.137 Non-Operational State: Operational 00:23:55.137 Entry Latency: Not Reported 00:23:55.137 Exit Latency: Not Reported 00:23:55.137 Relative Read Throughput: 0 00:23:55.137 Relative Read Latency: 0 00:23:55.137 Relative Write Throughput: 0 00:23:55.137 Relative Write Latency: 0 00:23:55.137 Idle Power: Not Reported 00:23:55.137 Active Power: Not Reported 00:23:55.137 Non-Operational Permissive Mode: Not Supported 00:23:55.137 00:23:55.137 Health Information 00:23:55.137 ================== 00:23:55.137 Critical Warnings: 00:23:55.137 Available Spare Space: 
OK 00:23:55.137 Temperature: OK 00:23:55.137 Device Reliability: OK 00:23:55.137 Read Only: No 00:23:55.137 Volatile Memory Backup: OK 00:23:55.137 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:55.137 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:55.137 Available Spare: 0% 00:23:55.137 Available Spare Threshold: 0% 00:23:55.137 Life Percentage Used:[2024-07-15 11:34:38.615647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.137 [2024-07-15 11:34:38.615651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8caec0) 00:23:55.137 [2024-07-15 11:34:38.615657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.137 [2024-07-15 11:34:38.615669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e8c0, cid 7, qid 0 00:23:55.137 [2024-07-15 11:34:38.615765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.137 [2024-07-15 11:34:38.615770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.137 [2024-07-15 11:34:38.615773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.137 [2024-07-15 11:34:38.615777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e8c0) on tqpair=0x8caec0 00:23:55.137 [2024-07-15 11:34:38.615805] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:55.137 [2024-07-15 11:34:38.615814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94de40) on tqpair=0x8caec0 00:23:55.137 [2024-07-15 11:34:38.615819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.138 [2024-07-15 11:34:38.615824] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94dfc0) on tqpair=0x8caec0 00:23:55.138 [2024-07-15 11:34:38.615828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.138 [2024-07-15 11:34:38.615832] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e140) on tqpair=0x8caec0 00:23:55.138 [2024-07-15 11:34:38.615836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.138 [2024-07-15 11:34:38.615840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.138 [2024-07-15 11:34:38.615843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.138 [2024-07-15 11:34:38.615850] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.615853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.615856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.138 [2024-07-15 11:34:38.615862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.138 [2024-07-15 11:34:38.615873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.138 [2024-07-15 11:34:38.615944] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.138 [2024-07-15 11:34:38.615949] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.138 [2024-07-15 11:34:38.615954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.615957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.138 [2024-07-15 11:34:38.615962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.615966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.615969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.138 [2024-07-15 11:34:38.615974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.138 [2024-07-15 11:34:38.615986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.138 [2024-07-15 11:34:38.616067] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.138 [2024-07-15 11:34:38.616073] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.138 [2024-07-15 11:34:38.616076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.138 [2024-07-15 11:34:38.616083] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:55.138 [2024-07-15 11:34:38.616087] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:55.138 [2024-07-15 11:34:38.616094] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616101] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.138 [2024-07-15 11:34:38.616107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.138 [2024-07-15 11:34:38.616115] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.138 [2024-07-15 11:34:38.616182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.138 [2024-07-15 11:34:38.616187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.138 [2024-07-15 11:34:38.616190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.138 [2024-07-15 11:34:38.616201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616205] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.138 [2024-07-15 11:34:38.616213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.138 [2024-07-15 11:34:38.616222] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.138 [2024-07-15 11:34:38.616295] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.138 [2024-07-15 11:34:38.616301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.138 [2024-07-15 11:34:38.616304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.138 [2024-07-15 11:34:38.616314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.138 [2024-07-15 11:34:38.616326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.138 [2024-07-15 11:34:38.616336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.138 [2024-07-15 11:34:38.616404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.138 [2024-07-15 11:34:38.616409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.138 [2024-07-15 11:34:38.616412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.138 [2024-07-15 11:34:38.616423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.138 [2024-07-15 11:34:38.616435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.138 [2024-07-15 11:34:38.616444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.138 [2024-07-15 11:34:38.616514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.138 [2024-07-15 11:34:38.616520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.138 [2024-07-15 11:34:38.616523] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.138 [2024-07-15 11:34:38.616534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.138 [2024-07-15 11:34:38.616546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.138 [2024-07-15 11:34:38.616555] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.138 [2024-07-15 11:34:38.616623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.138 [2024-07-15 11:34:38.616629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.138 [2024-07-15 11:34:38.616632] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.138 [2024-07-15 11:34:38.616643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.138 [2024-07-15 11:34:38.616655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.138 [2024-07-15 11:34:38.616663] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.138 [2024-07-15 11:34:38.616734] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.138 [2024-07-15 11:34:38.616740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.138 [2024-07-15 11:34:38.616742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.138 [2024-07-15 11:34:38.616753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.138 [2024-07-15 11:34:38.616760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.138 [2024-07-15 11:34:38.616765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.616774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 11:34:38.616844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.616851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.616854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.616857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.139 [2024-07-15 11:34:38.616865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.616868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.616871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.139 [2024-07-15 11:34:38.616877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.616886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 11:34:38.616954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.616960] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.616963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.616966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.139 
[2024-07-15 11:34:38.616973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.616977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.616980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.139 [2024-07-15 11:34:38.616985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.616994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 11:34:38.617073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.617079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.617081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.139 [2024-07-15 11:34:38.617093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.139 [2024-07-15 11:34:38.617104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.617114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 11:34:38.617183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.617188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.617191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.139 [2024-07-15 11:34:38.617202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.139 [2024-07-15 11:34:38.617214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.617223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 11:34:38.617303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.617308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.617313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.139 [2024-07-15 11:34:38.617324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.139 [2024-07-15 
11:34:38.617330] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.139 [2024-07-15 11:34:38.617336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.617344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 11:34:38.617410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.617416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.617419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.139 [2024-07-15 11:34:38.617430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.139 [2024-07-15 11:34:38.617442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.617450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 11:34:38.617519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.617524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.617527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.139 [2024-07-15 11:34:38.617538] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.139 [2024-07-15 11:34:38.617550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.617558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 11:34:38.617635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.617641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.617643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.139 [2024-07-15 11:34:38.617654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.139 [2024-07-15 11:34:38.617666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.617675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 11:34:38.617754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.617759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.617762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.139 [2024-07-15 11:34:38.617774] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617781] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.139 [2024-07-15 11:34:38.617787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.617795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 11:34:38.617860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.617866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.617869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.139 [2024-07-15 11:34:38.617880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.139 [2024-07-15 11:34:38.617891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.617900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 11:34:38.617970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.617975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.617978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617981] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.139 [2024-07-15 11:34:38.617989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.617995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.139 [2024-07-15 11:34:38.618001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.618010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 
11:34:38.618088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.618093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.618096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.618099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.139 [2024-07-15 11:34:38.618107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.618110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.618113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.139 [2024-07-15 11:34:38.618119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.139 [2024-07-15 11:34:38.618127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.139 [2024-07-15 11:34:38.618204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.139 [2024-07-15 11:34:38.618209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.139 [2024-07-15 11:34:38.618212] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.139 [2024-07-15 11:34:38.618215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.140 [2024-07-15 11:34:38.618223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618231] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618234] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.140 [2024-07-15 11:34:38.618240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.140 [2024-07-15 11:34:38.618249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.140 [2024-07-15 11:34:38.618315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.140 [2024-07-15 11:34:38.618321] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.140 [2024-07-15 11:34:38.618323] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.140 [2024-07-15 11:34:38.618334] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618338] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.140 [2024-07-15 11:34:38.618346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.140 [2024-07-15 11:34:38.618355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.140 [2024-07-15 11:34:38.618423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.140 [2024-07-15 11:34:38.618428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.140 [2024-07-15 
11:34:38.618431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.140 [2024-07-15 11:34:38.618442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618446] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.140 [2024-07-15 11:34:38.618454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.140 [2024-07-15 11:34:38.618463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.140 [2024-07-15 11:34:38.618539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.140 [2024-07-15 11:34:38.618545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.140 [2024-07-15 11:34:38.618548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.140 [2024-07-15 11:34:38.618559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618562] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618565] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.140 [2024-07-15 11:34:38.618571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.140 [2024-07-15 11:34:38.618580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.140 [2024-07-15 11:34:38.618656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.140 [2024-07-15 11:34:38.618661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.140 [2024-07-15 11:34:38.618664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.140 [2024-07-15 11:34:38.618675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.140 [2024-07-15 11:34:38.618689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.140 [2024-07-15 11:34:38.618697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.140 [2024-07-15 11:34:38.618768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.140 [2024-07-15 11:34:38.618773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.140 [2024-07-15 11:34:38.618776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 
00:23:55.140 [2024-07-15 11:34:38.618787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.140 [2024-07-15 11:34:38.618799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.140 [2024-07-15 11:34:38.618808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.140 [2024-07-15 11:34:38.618891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.140 [2024-07-15 11:34:38.618896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.140 [2024-07-15 11:34:38.618899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618902] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.140 [2024-07-15 11:34:38.618910] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.618917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.140 [2024-07-15 11:34:38.618922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.140 [2024-07-15 11:34:38.618931] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.140 [2024-07-15 11:34:38.619007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.140 [2024-07-15 11:34:38.619012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.140 [2024-07-15 11:34:38.619015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.619019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.140 [2024-07-15 11:34:38.619026] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.619030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.619033] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.140 [2024-07-15 11:34:38.619038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.140 [2024-07-15 11:34:38.619047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.140 [2024-07-15 11:34:38.619123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.140 [2024-07-15 11:34:38.619129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.140 [2024-07-15 11:34:38.619131] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.619134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.140 [2024-07-15 11:34:38.619142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.619146] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:55.140 [2024-07-15 11:34:38.619149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.140 [2024-07-15 11:34:38.619155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.140 [2024-07-15 11:34:38.619165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.140 [2024-07-15 11:34:38.623233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.140 [2024-07-15 11:34:38.623241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.140 [2024-07-15 11:34:38.623244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.623247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.140 [2024-07-15 11:34:38.623257] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.623260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.623263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8caec0) 00:23:55.140 [2024-07-15 11:34:38.623269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.140 [2024-07-15 11:34:38.623281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94e2c0, cid 3, qid 0 00:23:55.140 [2024-07-15 11:34:38.623353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.140 [2024-07-15 11:34:38.623359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.140 [2024-07-15 11:34:38.623362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.140 [2024-07-15 11:34:38.623365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94e2c0) on tqpair=0x8caec0 00:23:55.140 [2024-07-15 11:34:38.623371] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:55.140 0% 00:23:55.140 Data Units Read: 0 00:23:55.140 Data Units Written: 0 00:23:55.140 Host Read Commands: 0 00:23:55.140 Host Write Commands: 0 00:23:55.140 Controller Busy Time: 0 minutes 00:23:55.140 Power Cycles: 0 00:23:55.140 Power On Hours: 0 hours 00:23:55.140 Unsafe Shutdowns: 0 00:23:55.140 Unrecoverable Media Errors: 0 00:23:55.140 Lifetime Error Log Entries: 0 00:23:55.140 Warning Temperature Time: 0 minutes 00:23:55.141 Critical Temperature Time: 0 minutes 00:23:55.141 00:23:55.141 Number of Queues 00:23:55.141 ================ 00:23:55.141 Number of I/O Submission Queues: 127 00:23:55.141 Number of I/O Completion Queues: 127 00:23:55.141 00:23:55.141 Active Namespaces 00:23:55.141 ================= 00:23:55.141 Namespace ID:1 00:23:55.141 Error Recovery Timeout: Unlimited 00:23:55.141 Command Set Identifier: NVM (00h) 00:23:55.141 Deallocate: Supported 00:23:55.141 Deallocated/Unwritten Error: Not Supported 00:23:55.141 Deallocated Read Value: Unknown 00:23:55.141 Deallocate in Write Zeroes: Not Supported 00:23:55.141 Deallocated Guard Field: 0xFFFF 00:23:55.141 Flush: Supported 00:23:55.141 Reservation: Supported 00:23:55.141 Namespace Sharing Capabilities: Multiple Controllers 00:23:55.141 Size (in LBAs): 131072 (0GiB) 00:23:55.141 Capacity (in LBAs): 131072 (0GiB) 00:23:55.141 Utilization (in LBAs): 131072 (0GiB) 00:23:55.141 
NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:55.141 EUI64: ABCDEF0123456789 00:23:55.141 UUID: 351f6854-475f-490b-a517-31be250b1fca 00:23:55.141 Thin Provisioning: Not Supported 00:23:55.141 Per-NS Atomic Units: Yes 00:23:55.141 Atomic Boundary Size (Normal): 0 00:23:55.141 Atomic Boundary Size (PFail): 0 00:23:55.141 Atomic Boundary Offset: 0 00:23:55.141 Maximum Single Source Range Length: 65535 00:23:55.141 Maximum Copy Length: 65535 00:23:55.141 Maximum Source Range Count: 1 00:23:55.141 NGUID/EUI64 Never Reused: No 00:23:55.141 Namespace Write Protected: No 00:23:55.141 Number of LBA Formats: 1 00:23:55.141 Current LBA Format: LBA Format #00 00:23:55.141 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:55.141 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:55.141 rmmod nvme_tcp 00:23:55.141 rmmod nvme_fabrics 00:23:55.141 rmmod nvme_keyring 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 686285 ']' 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 686285 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 686285 ']' 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 686285 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.141 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 686285 00:23:55.452 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:55.452 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:55.452 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 686285' 00:23:55.452 killing process with pid 686285 00:23:55.452 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 686285 00:23:55.452 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 686285 00:23:55.452 
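For reference, the teardown that identify.sh is tracing around this point amounts to only a handful of commands. The sketch below is reconstructed from the calls visible in this trace (the subsystem NQN, module names and pid 686285 are the ones from this run); it is an outline, not the literal script:

# approximate cleanup sequence for the nvmf_identify test
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the test subsystem from the target
modprobe -v -r nvme-tcp                                           # unload initiator-side kernel modules
modprobe -v -r nvme-fabrics                                       #   (the rmmod lines above are this step's output)
kill 686285                                                       # stop the nvmf_tgt app, then wait for it to exit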
11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:55.452 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:55.452 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:55.452 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:55.452 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:55.452 11:34:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.452 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.452 11:34:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.990 11:34:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:57.990 00:23:57.990 real 0m9.639s 00:23:57.990 user 0m7.605s 00:23:57.990 sys 0m4.726s 00:23:57.990 11:34:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:57.990 11:34:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:57.990 ************************************ 00:23:57.990 END TEST nvmf_identify 00:23:57.990 ************************************ 00:23:57.990 11:34:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:57.990 11:34:41 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:57.990 11:34:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:57.990 11:34:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:57.990 11:34:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:57.990 ************************************ 00:23:57.990 START TEST nvmf_perf 00:23:57.990 ************************************ 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:57.990 * Looking for test storage... 
00:23:57.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.990 11:34:41 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:57.990 11:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:24:03.298 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.298 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:03.299 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:03.299 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:03.299 Found net devices under 0000:86:00.0: cvl_0_0 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:03.299 Found net devices under 0000:86:00.1: cvl_0_1 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:03.299 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:03.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:24:03.558 00:24:03.558 --- 10.0.0.2 ping statistics --- 00:24:03.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.558 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:24:03.558 00:24:03.558 --- 10.0.0.1 ping statistics --- 00:24:03.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.558 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=690051 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 690051 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 690051 ']' 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.558 11:34:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.558 [2024-07-15 11:34:47.013629] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:24:03.558 [2024-07-15 11:34:47.013671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.558 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.558 [2024-07-15 11:34:47.073644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.817 [2024-07-15 11:34:47.152458] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.817 [2024-07-15 11:34:47.152495] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:03.817 [2024-07-15 11:34:47.152501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.817 [2024-07-15 11:34:47.152507] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.817 [2024-07-15 11:34:47.152512] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.817 [2024-07-15 11:34:47.152567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.817 [2024-07-15 11:34:47.152675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.817 [2024-07-15 11:34:47.152781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.817 [2024-07-15 11:34:47.152782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.384 11:34:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.384 11:34:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:04.384 11:34:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:04.384 11:34:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:04.384 11:34:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:04.384 11:34:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.384 11:34:47 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:04.384 11:34:47 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:07.671 11:34:50 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:07.671 11:34:50 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:07.671 11:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:24:07.671 11:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:07.935 11:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:07.936 11:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:24:07.936 11:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:07.936 11:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:07.936 11:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:07.936 [2024-07-15 11:34:51.456507] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.936 11:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:08.197 11:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:08.197 11:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.457 11:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:08.457 11:34:51 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:08.457 11:34:52 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.716 [2024-07-15 11:34:52.183241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.716 11:34:52 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:08.975 11:34:52 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:24:08.975 11:34:52 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:08.975 11:34:52 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:08.975 11:34:52 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:10.352 Initializing NVMe Controllers 00:24:10.352 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:24:10.352 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:24:10.352 Initialization complete. Launching workers. 00:24:10.352 ======================================================== 00:24:10.352 Latency(us) 00:24:10.352 Device Information : IOPS MiB/s Average min max 00:24:10.352 PCIE (0000:5e:00.0) NSID 1 from core 0: 97910.02 382.46 326.39 32.47 4292.66 00:24:10.352 ======================================================== 00:24:10.352 Total : 97910.02 382.46 326.39 32.47 4292.66 00:24:10.352 00:24:10.352 11:34:53 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:10.352 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.287 Initializing NVMe Controllers 00:24:11.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:11.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:11.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:11.287 Initialization complete. Launching workers. 
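Stripped of the wrapper scripts, the target provisioning traced just above and one representative initiator invocation (the queue-depth-1 run whose results follow) reduce to roughly the commands below. This is a sketch assembled from the RPC calls and perf flags shown in this log, with paths abbreviated; it is not the literal contents of perf.sh:

# target side, via the SPDK RPC client
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py bdev_malloc_create 64 512                                   # 64 MB malloc bdev, 512-byte blocks -> Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # becomes NSID 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe at 0000:5e:00.0, becomes NSID 2
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: queue depth 1, 4 KiB, 50/50 random read/write for 1 second over TCP
build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'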
00:24:11.287 ======================================================== 00:24:11.287 Latency(us) 00:24:11.287 Device Information : IOPS MiB/s Average min max 00:24:11.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.66 0.38 10592.23 126.40 44953.44 00:24:11.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.80 0.22 17919.75 7962.96 47884.20 00:24:11.287 ======================================================== 00:24:11.287 Total : 152.46 0.60 13274.20 126.40 47884.20 00:24:11.288 00:24:11.288 11:34:54 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.546 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.923 Initializing NVMe Controllers 00:24:12.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:12.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:12.923 Initialization complete. Launching workers. 00:24:12.923 ======================================================== 00:24:12.923 Latency(us) 00:24:12.923 Device Information : IOPS MiB/s Average min max 00:24:12.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10888.64 42.53 2939.00 435.00 8906.62 00:24:12.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3854.04 15.05 8461.21 6260.43 47860.00 00:24:12.923 ======================================================== 00:24:12.923 Total : 14742.68 57.59 4382.62 435.00 47860.00 00:24:12.923 00:24:12.923 11:34:56 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:12.923 11:34:56 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:12.923 11:34:56 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:12.923 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.456 Initializing NVMe Controllers 00:24:15.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.457 Controller IO queue size 128, less than required. 00:24:15.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.457 Controller IO queue size 128, less than required. 00:24:15.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:15.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:15.457 Initialization complete. Launching workers. 
00:24:15.457 ======================================================== 00:24:15.457 Latency(us) 00:24:15.457 Device Information : IOPS MiB/s Average min max 00:24:15.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1718.63 429.66 75379.49 50353.11 104684.27 00:24:15.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 605.87 151.47 220460.92 69347.35 330218.72 00:24:15.457 ======================================================== 00:24:15.457 Total : 2324.51 581.13 113194.26 50353.11 330218.72 00:24:15.457 00:24:15.457 11:34:58 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:15.457 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.457 No valid NVMe controllers or AIO or URING devices found 00:24:15.457 Initializing NVMe Controllers 00:24:15.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.457 Controller IO queue size 128, less than required. 00:24:15.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.457 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:15.457 Controller IO queue size 128, less than required. 00:24:15.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.457 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:15.457 WARNING: Some requested NVMe devices were skipped 00:24:15.457 11:34:58 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:15.457 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.988 Initializing NVMe Controllers 00:24:17.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.988 Controller IO queue size 128, less than required. 00:24:17.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.988 Controller IO queue size 128, less than required. 00:24:17.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:17.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:17.988 Initialization complete. Launching workers. 
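Because this last invocation adds --transport-stat, spdk_nvme_perf prints per-namespace TCP transport counters (grouped by lcore) before the usual latency table. Reading the first block below under the assumption that idle_polls counts poll iterations that found no work to do (the field names come from the dump itself; their interpretation is not stated in this log): the connection serving NSID 1 did useful work on roughly (15983 - 9212) / 15983 ≈ 42% of its polls while completing 6393 commands.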
00:24:17.988 00:24:17.988 ==================== 00:24:17.988 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:17.988 TCP transport: 00:24:17.988 polls: 15983 00:24:17.988 idle_polls: 9212 00:24:17.988 sock_completions: 6771 00:24:17.988 nvme_completions: 6393 00:24:17.988 submitted_requests: 9612 00:24:17.988 queued_requests: 1 00:24:17.988 00:24:17.988 ==================== 00:24:17.988 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:17.988 TCP transport: 00:24:17.988 polls: 20133 00:24:17.988 idle_polls: 12087 00:24:17.988 sock_completions: 8046 00:24:17.988 nvme_completions: 6693 00:24:17.988 submitted_requests: 10088 00:24:17.988 queued_requests: 1 00:24:17.988 ======================================================== 00:24:17.988 Latency(us) 00:24:17.988 Device Information : IOPS MiB/s Average min max 00:24:17.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1594.59 398.65 81715.59 40624.66 132280.56 00:24:17.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1669.43 417.36 77660.91 33749.06 112107.72 00:24:17.988 ======================================================== 00:24:17.988 Total : 3264.01 816.00 79641.77 33749.06 132280.56 00:24:17.988 00:24:17.988 11:35:01 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:17.988 11:35:01 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:18.247 rmmod nvme_tcp 00:24:18.247 rmmod nvme_fabrics 00:24:18.247 rmmod nvme_keyring 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 690051 ']' 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 690051 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 690051 ']' 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 690051 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 690051 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 690051' 00:24:18.247 killing process with pid 690051 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 690051 00:24:18.247 11:35:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 690051 00:24:20.149 11:35:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:20.149 11:35:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:20.149 11:35:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:20.149 11:35:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:20.149 11:35:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:20.149 11:35:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.149 11:35:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.149 11:35:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.084 11:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:22.084 00:24:22.084 real 0m24.285s 00:24:22.084 user 1m4.551s 00:24:22.084 sys 0m7.647s 00:24:22.084 11:35:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:22.084 11:35:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:22.084 ************************************ 00:24:22.084 END TEST nvmf_perf 00:24:22.084 ************************************ 00:24:22.084 11:35:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:22.084 11:35:05 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:22.084 11:35:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:22.084 11:35:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:22.084 11:35:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:22.084 ************************************ 00:24:22.084 START TEST nvmf_fio_host 00:24:22.084 ************************************ 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:22.084 * Looking for test storage... 
00:24:22.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.084 11:35:05 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:22.085 11:35:05 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.652 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:28.653 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:28.653 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:28.653 Found net devices under 0000:86:00.0: cvl_0_0 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:28.653 Found net devices under 0000:86:00.1: cvl_0_1 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
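The two "Found 0000:86:00.x (0x8086 - 0x159b)" lines above come from gather_supported_nvmf_pci_devs: device ID 0x159b marks both ports as Intel E810 parts, and each PCI function is then resolved to its kernel interface by listing the net/ directory the kernel exposes under that device in sysfs, which is exactly where the "Found net devices under 0000:86:00.x" messages come from. A minimal sketch of that resolution step, using the same sysfs path the trace shows (it assumes the device is present; interface names will differ on other hosts):

pci=0000:86:00.0
for dev in /sys/bus/pci/devices/$pci/net/*; do
  echo "Found net device under $pci: ${dev##*/}"   # e.g. cvl_0_0 on this rig
done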
00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:28.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:24:28.653 00:24:28.653 --- 10.0.0.2 ping statistics --- 00:24:28.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.653 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:24:28.653 00:24:28.653 --- 10.0.0.1 ping statistics --- 00:24:28.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.653 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=696150 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 696150 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 696150 ']' 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.653 11:35:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.653 [2024-07-15 11:35:11.386142] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:24:28.653 [2024-07-15 11:35:11.386182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.653 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.653 [2024-07-15 11:35:11.457879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.653 [2024-07-15 11:35:11.542205] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
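The pair of single-packet pings a few lines up is the essence of nvmf_tcp_init: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so the NVMe/TCP traffic in the tests below really traverses the NIC rather than loopback. A condensed sketch of that topology, using the same interface names, addresses and firewall rule that appear in the trace (it assumes the two ports are cabled to each other or sit on the same L2 segment):

ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port through
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator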
00:24:28.653 [2024-07-15 11:35:11.542245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.653 [2024-07-15 11:35:11.542253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.653 [2024-07-15 11:35:11.542259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.653 [2024-07-15 11:35:11.542264] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.653 [2024-07-15 11:35:11.542312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.653 [2024-07-15 11:35:11.542438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.653 [2024-07-15 11:35:11.542492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.653 [2024-07-15 11:35:11.542493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.653 11:35:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.653 11:35:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:28.653 11:35:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:28.911 [2024-07-15 11:35:12.365487] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.911 11:35:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:28.911 11:35:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:28.911 11:35:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.911 11:35:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:29.169 Malloc1 00:24:29.169 11:35:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.426 11:35:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:29.426 11:35:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.684 [2024-07-15 11:35:13.159928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.684 11:35:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:29.941 11:35:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:29.942 11:35:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:30.199 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:30.199 fio-3.35 00:24:30.199 Starting 1 thread 00:24:30.199 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.726 00:24:32.726 test: (groupid=0, jobs=1): err= 0: pid=696636: Mon Jul 15 11:35:15 2024 00:24:32.726 read: IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(92.2MiB/2006msec) 00:24:32.726 slat (nsec): min=1590, max=254061, avg=1753.08, stdev=2272.82 00:24:32.726 clat (usec): min=3230, max=10470, avg=6015.25, stdev=427.54 00:24:32.726 lat (usec): min=3262, max=10472, avg=6017.01, stdev=427.48 00:24:32.726 clat percentiles (usec): 00:24:32.726 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5669], 00:24:32.726 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6128], 00:24:32.726 | 70.00th=[ 6259], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:24:32.726 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 8455], 99.95th=[ 9110], 00:24:32.726 | 99.99th=[10421] 00:24:32.726 bw ( KiB/s): 
min=46040, max=47896, per=100.00%, avg=47110.00, stdev=813.02, samples=4 00:24:32.726 iops : min=11510, max=11974, avg=11777.50, stdev=203.26, samples=4 00:24:32.726 write: IOPS=11.7k, BW=45.7MiB/s (48.0MB/s)(91.8MiB/2006msec); 0 zone resets 00:24:32.726 slat (nsec): min=1654, max=238092, avg=1831.90, stdev=1721.44 00:24:32.726 clat (usec): min=2489, max=9650, avg=4826.60, stdev=375.04 00:24:32.726 lat (usec): min=2504, max=9651, avg=4828.44, stdev=375.03 00:24:32.726 clat percentiles (usec): 00:24:32.726 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:24:32.726 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 00:24:32.726 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5342], 00:24:32.726 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 7832], 99.95th=[ 8586], 00:24:32.726 | 99.99th=[ 9634] 00:24:32.726 bw ( KiB/s): min=46584, max=47312, per=100.00%, avg=46850.00, stdev=321.29, samples=4 00:24:32.726 iops : min=11646, max=11828, avg=11712.50, stdev=80.32, samples=4 00:24:32.726 lat (msec) : 4=0.57%, 10=99.42%, 20=0.01% 00:24:32.726 cpu : usr=73.07%, sys=25.09%, ctx=81, majf=0, minf=6 00:24:32.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:32.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:32.726 issued rwts: total=23616,23491,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:32.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:32.726 00:24:32.726 Run status group 0 (all jobs): 00:24:32.726 READ: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=92.2MiB (96.7MB), run=2006-2006msec 00:24:32.726 WRITE: bw=45.7MiB/s (48.0MB/s), 45.7MiB/s-45.7MiB/s (48.0MB/s-48.0MB/s), io=91.8MiB (96.2MB), run=2006-2006msec 00:24:32.726 11:35:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:32.726 11:35:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:32.726 11:35:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:32.726 11:35:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:32.726 11:35:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:32.726 11:35:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.726 11:35:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:32.726 11:35:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:32.726 11:35:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # grep libasan 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:32.726 11:35:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:32.726 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:32.726 fio-3.35 00:24:32.726 Starting 1 thread 00:24:32.984 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.513 00:24:35.513 test: (groupid=0, jobs=1): err= 0: pid=697106: Mon Jul 15 11:35:18 2024 00:24:35.513 read: IOPS=10.7k, BW=167MiB/s (175MB/s)(334MiB/2005msec) 00:24:35.513 slat (nsec): min=2562, max=86391, avg=2890.13, stdev=1331.61 00:24:35.513 clat (usec): min=1829, max=50822, avg=7120.74, stdev=3508.27 00:24:35.513 lat (usec): min=1832, max=50825, avg=7123.63, stdev=3508.33 00:24:35.513 clat percentiles (usec): 00:24:35.513 | 1.00th=[ 3654], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5407], 00:24:35.513 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6915], 60.00th=[ 7373], 00:24:35.513 | 70.00th=[ 7701], 80.00th=[ 8094], 90.00th=[ 8979], 95.00th=[10159], 00:24:35.513 | 99.00th=[12518], 99.50th=[44827], 99.90th=[49546], 99.95th=[50594], 00:24:35.513 | 99.99th=[50594] 00:24:35.513 bw ( KiB/s): min=73024, max=98112, per=50.42%, avg=86000.00, stdev=10254.81, samples=4 00:24:35.513 iops : min= 4564, max= 6132, avg=5375.00, stdev=640.93, samples=4 00:24:35.513 write: IOPS=6452, BW=101MiB/s (106MB/s)(176MiB/1744msec); 0 zone resets 00:24:35.513 slat (usec): min=29, max=388, avg=32.24, stdev= 7.71 00:24:35.513 clat (usec): min=4867, max=15902, avg=8567.57, stdev=1504.24 00:24:35.513 lat (usec): min=4898, max=15934, avg=8599.80, stdev=1506.04 00:24:35.513 clat percentiles (usec): 00:24:35.513 | 1.00th=[ 5866], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7308], 00:24:35.513 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:24:35.513 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11338], 00:24:35.513 | 99.00th=[12911], 99.50th=[13566], 99.90th=[14877], 99.95th=[15139], 00:24:35.513 | 99.99th=[15795] 00:24:35.513 bw ( KiB/s): min=77312, max=102400, per=86.59%, avg=89400.00, stdev=10256.07, samples=4 00:24:35.513 iops : min= 4832, max= 6400, avg=5587.50, stdev=641.00, samples=4 00:24:35.513 lat (msec) : 2=0.01%, 4=1.43%, 10=89.44%, 20=8.73%, 50=0.35% 00:24:35.513 lat (msec) : 100=0.04% 00:24:35.513 cpu : 
usr=84.93%, sys=13.92%, ctx=52, majf=0, minf=3 00:24:35.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:35.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:35.513 issued rwts: total=21374,11254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:35.513 00:24:35.513 Run status group 0 (all jobs): 00:24:35.513 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=334MiB (350MB), run=2005-2005msec 00:24:35.513 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=176MiB (184MB), run=1744-1744msec 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:35.513 rmmod nvme_tcp 00:24:35.513 rmmod nvme_fabrics 00:24:35.513 rmmod nvme_keyring 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 696150 ']' 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 696150 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 696150 ']' 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 696150 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 696150 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 696150' 00:24:35.513 killing process with pid 696150 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 696150 00:24:35.513 11:35:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 696150 00:24:35.513 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:35.513 11:35:19 nvmf_tcp.nvmf_fio_host 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:35.513 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:35.513 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:35.513 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:35.513 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.513 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.513 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.052 11:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:38.052 00:24:38.052 real 0m15.712s 00:24:38.052 user 0m46.750s 00:24:38.052 sys 0m6.315s 00:24:38.052 11:35:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:38.052 11:35:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.052 ************************************ 00:24:38.052 END TEST nvmf_fio_host 00:24:38.052 ************************************ 00:24:38.052 11:35:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:38.052 11:35:21 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:38.052 11:35:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:38.052 11:35:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:38.052 11:35:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.052 ************************************ 00:24:38.052 START TEST nvmf_failover 00:24:38.052 ************************************ 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:38.052 * Looking for test storage... 
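Before the failover suite starts, it is worth spelling out what the fio_host pass above consisted of: the target was provisioned entirely over JSON-RPC (transport, malloc bdev, subsystem, namespace, listener), and the initiator side was stock fio with SPDK's NVMe fio plugin preloaded, addressing the subsystem through a key=value --filename string rather than a block device. A condensed sketch of that sequence, using the same commands and arguments visible in the trace ($SPDK here is only shorthand for the repository checkout used by this job):

rpc=$SPDK/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
  $SPDK/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The --filename syntax is how the plugin is told to open an NVMe-oF namespace (trtype/traddr/trsvcid/ns) instead of a local file; the ioengine=spdk setting itself lives in example_config.fio, which matches the "ioengine=spdk, iodepth=128" banner printed by both fio runs above.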
00:24:38.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:38.052 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:38.053 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.053 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:38.053 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:38.053 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:38.053 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.053 11:35:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.053 11:35:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.053 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:38.053 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:38.053 11:35:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:38.053 11:35:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:43.348 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:43.348 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:43.348 Found net devices under 0000:86:00.0: cvl_0_0 00:24:43.348 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:43.349 Found net devices under 0000:86:00.1: cvl_0_1 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.349 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.608 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.608 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.608 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:43.608 11:35:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:43.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:43.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:24:43.608 00:24:43.608 --- 10.0.0.2 ping statistics --- 00:24:43.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.608 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:24:43.608 00:24:43.608 --- 10.0.0.1 ping statistics --- 00:24:43.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.608 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=701067 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 701067 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 701067 ']' 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.608 11:35:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:43.608 [2024-07-15 11:35:27.183812] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
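One difference from the fio_host run above: this target is launched with -m 0xE rather than -m 0xF, which is why the app reports "Total cores available: 3" and, a little further down, starts reactors only on cores 1, 2 and 3. The mask is simply a bitmap of CPU indices; a quick, hypothetical one-liner to expand a mask (not part of the test scripts):

mask=0xE   # 0b1110 -> cores 1, 2 and 3
for i in {0..31}; do (( (mask >> i) & 1 )) && echo "core $i"; done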
00:24:43.608 [2024-07-15 11:35:27.183851] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.867 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.867 [2024-07-15 11:35:27.253905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:43.867 [2024-07-15 11:35:27.332597] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.867 [2024-07-15 11:35:27.332629] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.867 [2024-07-15 11:35:27.332637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.867 [2024-07-15 11:35:27.332643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.867 [2024-07-15 11:35:27.332648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.867 [2024-07-15 11:35:27.332756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.867 [2024-07-15 11:35:27.332861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.867 [2024-07-15 11:35:27.332862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.431 11:35:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.431 11:35:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:44.431 11:35:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:44.431 11:35:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:44.431 11:35:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:44.687 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.687 11:35:28 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:44.687 [2024-07-15 11:35:28.192866] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.687 11:35:28 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:44.945 Malloc0 00:24:44.945 11:35:28 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.203 11:35:28 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:45.461 11:35:28 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.461 [2024-07-15 11:35:28.959786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.461 11:35:28 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:45.719 [2024-07-15 
11:35:29.140289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:45.719 11:35:29 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:45.977 [2024-07-15 11:35:29.312864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:45.977 11:35:29 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:45.977 11:35:29 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=701331 00:24:45.977 11:35:29 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:45.977 11:35:29 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 701331 /var/tmp/bdevperf.sock 00:24:45.977 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 701331 ']' 00:24:45.977 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.977 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.977 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:45.977 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.977 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:46.910 11:35:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.910 11:35:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:46.910 11:35:30 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:47.170 NVMe0n1 00:24:47.170 11:35:30 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:47.492 00:24:47.492 11:35:30 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:47.492 11:35:30 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=701627 00:24:47.492 11:35:30 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:48.429 11:35:31 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:48.688 [2024-07-15 11:35:32.161196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16080 is same with the state(5) to be set 00:24:48.688 [2024-07-15 11:35:32.161248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16080 
is same with the state(5) to be set 00:24:48.688 (the preceding tcp.c:1607:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0xe16080 is repeated roughly 40 more times, identical apart from timestamps between 11:35:32.161255 and 11:35:32.161482; the duplicate entries are condensed here) 00:24:48.688 11:35:32 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:52.000 11:35:35 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:52.000 00:24:52.000 11:35:35 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:52.258 [2024-07-15 11:35:35.686727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16f20 is same with the state(5) to be set (this *ERROR* line is then repeated roughly 45 times, identical apart from timestamps between 11:35:35.686768 and 11:35:35.687046; all but the final occurrences are condensed here) 00:24:52.259 [2024-07-15 11:35:35.687052]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16f20 is same with the state(5) to be set 00:24:52.259 [2024-07-15 11:35:35.687058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16f20 is same with the state(5) to be set 00:24:52.259 11:35:35 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:55.543 11:35:38 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.543 [2024-07-15 11:35:38.885522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.543 11:35:38 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:56.479 11:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:56.738 11:35:40 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 701627 00:25:03.312 0 00:25:03.312 11:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 701331 00:25:03.312 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 701331 ']' 00:25:03.312 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 701331 00:25:03.312 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:03.312 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:03.312 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 701331 00:25:03.312 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:03.312 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:03.312 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 701331' 00:25:03.312 killing process with pid 701331 00:25:03.312 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 701331 00:25:03.312 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 701331 00:25:03.312 11:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:03.312 [2024-07-15 11:35:29.372868] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:03.312 [2024-07-15 11:35:29.372918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701331 ] 00:25:03.312 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.312 [2024-07-15 11:35:29.440037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.312 [2024-07-15 11:35:29.515095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.312 Running I/O for 15 seconds... 
00:25:03.312 [2024-07-15 11:35:32.162580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.312 [2024-07-15 11:35:32.162621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.312 (the dump continues with one such nvme_io_qpair_print_command / spdk_nvme_print_completion pair per outstanding command on qid:1 - READs from lba:97264 through lba:97592 and WRITEs from lba:97600 through lba:98032, all len:8, every completion reported as ABORTED - SQ DELETION (00/08) - followed by nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request entries that manually complete the still-queued WRITEs from lba:98040 onward with the same status; the repetitive entries are condensed here) 00:25:03.316 [2024-07-15 11:35:32.164819]
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.316 [2024-07-15 11:35:32.164826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.316 [2024-07-15 11:35:32.164831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98256 len:8 PRP1 0x0 PRP2 0x0 00:25:03.316 [2024-07-15 11:35:32.164838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:32.164845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.316 [2024-07-15 11:35:32.164850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.316 [2024-07-15 11:35:32.164855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98264 len:8 PRP1 0x0 PRP2 0x0 00:25:03.316 [2024-07-15 11:35:32.164862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:32.164869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.316 [2024-07-15 11:35:32.164874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.316 [2024-07-15 11:35:32.164880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98272 len:8 PRP1 0x0 PRP2 0x0 00:25:03.316 [2024-07-15 11:35:32.164886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:32.164892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.316 [2024-07-15 11:35:32.164897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.316 [2024-07-15 11:35:32.164903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98280 len:8 PRP1 0x0 PRP2 0x0 00:25:03.316 [2024-07-15 11:35:32.164910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:32.164951] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc6b3b0 was disconnected and freed. reset controller. 
00:25:03.316 [2024-07-15 11:35:32.164960] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:03.316 [2024-07-15 11:35:32.164981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.316 [2024-07-15 11:35:32.164989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:32.164997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.316 [2024-07-15 11:35:32.165008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:32.165015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.316 [2024-07-15 11:35:32.165023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:32.165030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.316 [2024-07-15 11:35:32.172290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:32.172303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.316 [2024-07-15 11:35:32.172340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d540 (9): Bad file descriptor 00:25:03.316 [2024-07-15 11:35:32.175149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.316 [2024-07-15 11:35:32.246610] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:03.316 [2024-07-15 11:35:35.688238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688449] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.316 [2024-07-15 11:35:35.688549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.316 [2024-07-15 11:35:35.688558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:105 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30032 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.688990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.688998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.689005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.689021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.689039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.317 [2024-07-15 11:35:35.689053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.317 [2024-07-15 11:35:35.689070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:03.317 [2024-07-15 11:35:35.689085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.317 [2024-07-15 11:35:35.689100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.317 [2024-07-15 11:35:35.689115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.317 [2024-07-15 11:35:35.689131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.317 [2024-07-15 11:35:35.689146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.317 [2024-07-15 11:35:35.689161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.317 [2024-07-15 11:35:35.689176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.317 [2024-07-15 11:35:35.689191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.317 [2024-07-15 11:35:35.689206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.317 [2024-07-15 11:35:35.689216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.317 [2024-07-15 11:35:35.689223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689242] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689395] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 
[2024-07-15 11:35:35.689713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.318 [2024-07-15 11:35:35.689798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.318 [2024-07-15 11:35:35.689842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30488 len:8 PRP1 0x0 PRP2 0x0 00:25:03.318 [2024-07-15 11:35:35.689848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.318 [2024-07-15 11:35:35.689864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.318 [2024-07-15 11:35:35.689869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30496 len:8 PRP1 0x0 PRP2 0x0 00:25:03.318 [2024-07-15 11:35:35.689876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.318 [2024-07-15 11:35:35.689890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.318 [2024-07-15 11:35:35.689896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30504 len:8 PRP1 0x0 PRP2 0x0 
00:25:03.318 [2024-07-15 11:35:35.689902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.318 [2024-07-15 11:35:35.689910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.318 [2024-07-15 11:35:35.689915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.318 [2024-07-15 11:35:35.689921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30512 len:8 PRP1 0x0 PRP2 0x0 00:25:03.318 [2024-07-15 11:35:35.689927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.689935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.689941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.689947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30520 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.689953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.689961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.689966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.689971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30528 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.689979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.689986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.689991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.689996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30536 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30544 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30552 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30560 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30568 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30576 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30584 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30592 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30600 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30608 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30616 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30624 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30632 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30640 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30648 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:03.319 [2024-07-15 11:35:35.690374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30656 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30664 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30672 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30680 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30688 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30696 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690525] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30704 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.319 [2024-07-15 11:35:35.690554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.319 [2024-07-15 11:35:35.690560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30712 len:8 PRP1 0x0 PRP2 0x0 00:25:03.319 [2024-07-15 11:35:35.690566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.319 [2024-07-15 11:35:35.690573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.320 [2024-07-15 11:35:35.690579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.320 [2024-07-15 11:35:35.690585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30720 len:8 PRP1 0x0 PRP2 0x0 00:25:03.320 [2024-07-15 11:35:35.690591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:35.690598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.320 [2024-07-15 11:35:35.690604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.320 [2024-07-15 11:35:35.690610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30728 len:8 PRP1 0x0 PRP2 0x0 00:25:03.320 [2024-07-15 11:35:35.690617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:35.690658] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe18380 was disconnected and freed. reset controller. 
00:25:03.320 [2024-07-15 11:35:35.690667] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:03.320 [2024-07-15 11:35:35.690688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.320 [2024-07-15 11:35:35.690696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:35.690704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.320 [2024-07-15 11:35:35.690711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:35.690718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.320 [2024-07-15 11:35:35.690725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:35.690732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.320 [2024-07-15 11:35:35.690738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:35.690745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.320 [2024-07-15 11:35:35.690773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d540 (9): Bad file descriptor 00:25:03.320 [2024-07-15 11:35:35.693590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.320 [2024-07-15 11:35:35.762034] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
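Each of these reset cycles ends with a "Resetting controller successful" notice from bdev_nvme.c, and the failover test later counts those notices to confirm that every planned path switch actually completed. A minimal sketch of that kind of check, assuming the bdevperf output was captured to the try.txt file that the trace cats further below (the exact file the script greps is not shown in this excerpt):
  # Count completed controller resets in the captured bdevperf output.
  count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  # The trace further below expects exactly three successful resets, one per failover step.
  if (( count != 3 )); then
      echo "unexpected reset count: $count" >&2
      exit 1
  fi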
00:25:03.320 [2024-07-15 11:35:40.083042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.320 [2024-07-15 11:35:40.083086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.320 [2024-07-15 11:35:40.083103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.320 [2024-07-15 11:35:40.083117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.320 [2024-07-15 11:35:40.083131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d540 is same with the state(5) to be set 00:25:03.320 [2024-07-15 11:35:40.083180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.320 [2024-07-15 11:35:40.083575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.320 [2024-07-15 11:35:40.083583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 
11:35:40.083764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.083936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.321 [2024-07-15 11:35:40.083951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.321 [2024-07-15 11:35:40.083966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.321 [2024-07-15 11:35:40.083980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.083989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.321 [2024-07-15 11:35:40.083996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.321 [2024-07-15 11:35:40.084011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.321 [2024-07-15 11:35:40.084026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.321 [2024-07-15 11:35:40.084041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.084056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:66 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.084071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.084085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.084101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.084116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.084131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.084146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.084161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.084175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.084190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.321 [2024-07-15 11:35:40.084200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.321 [2024-07-15 11:35:40.084207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42000 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 
11:35:40.084374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.322 [2024-07-15 11:35:40.084526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.322 [2024-07-15 11:35:40.084858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.322 [2024-07-15 11:35:40.084865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.084874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.323 [2024-07-15 11:35:40.084880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.084888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.323 [2024-07-15 11:35:40.084895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.084903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.084910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.084918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.084924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.084932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.084938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.084947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.084954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.084962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.084969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.084977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.084984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 
[2024-07-15 11:35:40.084992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.084999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.085007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.085016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.085023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.085030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.085038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.085044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.085052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.085059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.085066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.085073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.085081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.085088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.085095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.085102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.085111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.085118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.085126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.323 [2024-07-15 11:35:40.085132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.323 [2024-07-15 11:35:40.085151] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:03.323 [2024-07-15 11:35:40.085158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:03.323 [2024-07-15 11:35:40.085167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42296 len:8 PRP1 0x0 PRP2 0x0
00:25:03.323 [2024-07-15 11:35:40.085173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:03.323 [2024-07-15 11:35:40.085215] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc48d20 was disconnected and freed. reset controller.
00:25:03.323 [2024-07-15 11:35:40.085230] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:03.323 [2024-07-15 11:35:40.085238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:03.323 [2024-07-15 11:35:40.088049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:03.323 [2024-07-15 11:35:40.088078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d540 (9): Bad file descriptor
00:25:03.323 [2024-07-15 11:35:40.160828] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:03.323
00:25:03.323 Latency(us)
00:25:03.323 Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:25:03.323 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:03.323 Verification LBA range: start 0x0 length 0x4000
00:25:03.323 NVMe0n1            :      15.01   10839.03      42.34     642.99       0.00   11125.50     587.69   16754.42
00:25:03.323 ===================================================================================================================
00:25:03.323 Total              :           10839.03      42.34     642.99       0.00   11125.50     587.69   16754.42
00:25:03.323 Received shutdown signal, test time was about 15.000000 seconds
00:25:03.323
00:25:03.323 Latency(us)
00:25:03.323 Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:25:03.323 ===================================================================================================================
00:25:03.323 Total              :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:25:03.323 11:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:03.323 11:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:03.323 11:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:03.323 11:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=704107
00:25:03.323 11:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:03.323 11:35:46 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 704107 /var/tmp/bdevperf.sock
00:25:03.323 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 704107 ']'
00:25:03.323 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:03.323 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:03.323 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:03.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:03.323 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:03.323 11:35:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:03.914 11:35:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:03.914 11:35:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:03.914 11:35:47 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:03.914 [2024-07-15 11:35:47.374345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:03.914 11:35:47 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:04.172 [2024-07-15 11:35:47.554830] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:04.172 11:35:47 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:04.430 NVMe0n1 00:25:04.430 11:35:47 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:04.688 00:25:04.688 11:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:04.946 00:25:04.946 11:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:04.946 11:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:05.205 11:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:05.205 11:35:48 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:08.487 11:35:51 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:08.488 11:35:51 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:08.488 11:35:51 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:08.488 11:35:51 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=705036 00:25:08.488 11:35:51 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 705036 00:25:09.865 0 00:25:09.865 11:35:53 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:09.865 [2024-07-15 11:35:46.421055] Starting SPDK v24.09-pre git sha1 
e7cce062d / DPDK 24.03.0 initialization... 00:25:09.865 [2024-07-15 11:35:46.421105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid704107 ] 00:25:09.865 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.866 [2024-07-15 11:35:46.487157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.866 [2024-07-15 11:35:46.556340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.866 [2024-07-15 11:35:48.735735] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:09.866 [2024-07-15 11:35:48.735783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.866 [2024-07-15 11:35:48.735795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.866 [2024-07-15 11:35:48.735803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.866 [2024-07-15 11:35:48.735811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.866 [2024-07-15 11:35:48.735818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.866 [2024-07-15 11:35:48.735824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.866 [2024-07-15 11:35:48.735831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.866 [2024-07-15 11:35:48.735838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.866 [2024-07-15 11:35:48.735844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.866 [2024-07-15 11:35:48.735873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238a540 (9): Bad file descriptor 00:25:09.866 [2024-07-15 11:35:48.735887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.866 [2024-07-15 11:35:48.779430] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:09.866 Running I/O for 1 seconds... 
00:25:09.866 00:25:09.866 Latency(us) 00:25:09.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.866 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:09.866 Verification LBA range: start 0x0 length 0x4000 00:25:09.866 NVMe0n1 : 1.00 10890.37 42.54 0.00 0.00 11711.42 2037.31 9687.93 00:25:09.866 =================================================================================================================== 00:25:09.866 Total : 10890.37 42.54 0.00 0.00 11711.42 2037.31 9687.93 00:25:09.866 11:35:53 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:09.866 11:35:53 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:09.866 11:35:53 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:09.866 11:35:53 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:09.866 11:35:53 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:10.123 11:35:53 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:10.381 11:35:53 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:13.669 11:35:56 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.669 11:35:56 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:13.669 11:35:56 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 704107 00:25:13.669 11:35:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 704107 ']' 00:25:13.669 11:35:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 704107 00:25:13.669 11:35:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:13.669 11:35:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.669 11:35:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 704107 00:25:13.669 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:13.669 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:13.669 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 704107' 00:25:13.669 killing process with pid 704107 00:25:13.669 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 704107 00:25:13.669 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 704107 00:25:13.669 11:35:57 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:13.669 11:35:57 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:13.930 11:35:57 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:13.930 rmmod nvme_tcp 00:25:13.930 rmmod nvme_fabrics 00:25:13.930 rmmod nvme_keyring 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 701067 ']' 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 701067 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 701067 ']' 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 701067 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 701067 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 701067' 00:25:13.930 killing process with pid 701067 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 701067 00:25:13.930 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 701067 00:25:14.224 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:14.224 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:14.224 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:14.224 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:14.224 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:14.224 11:35:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.224 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.224 11:35:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.762 11:35:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:16.762 00:25:16.762 real 0m38.538s 00:25:16.762 user 2m3.289s 00:25:16.762 sys 0m7.709s 00:25:16.762 11:35:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:16.762 11:35:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:16.762 
************************************ 00:25:16.762 END TEST nvmf_failover 00:25:16.762 ************************************ 00:25:16.762 11:35:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:16.762 11:35:59 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:16.762 11:35:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:16.762 11:35:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.762 11:35:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:16.762 ************************************ 00:25:16.762 START TEST nvmf_host_discovery 00:25:16.762 ************************************ 00:25:16.762 11:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:16.762 * Looking for test storage... 00:25:16.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:16.763 11:35:59 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:16.763 11:35:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.038 11:36:05 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:22.038 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:22.038 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:22.038 11:36:05 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:22.038 Found net devices under 0000:86:00.0: cvl_0_0 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:22.038 Found net devices under 0000:86:00.1: cvl_0_1 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.038 11:36:05 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:22.038 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:22.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:25:22.297 00:25:22.297 --- 10.0.0.2 ping statistics --- 00:25:22.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.297 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:22.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:25:22.297 00:25:22.297 --- 10.0.0.1 ping statistics --- 00:25:22.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.297 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=709595 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
709595 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 709595 ']' 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.297 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.297 [2024-07-15 11:36:05.767546] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:22.298 [2024-07-15 11:36:05.767590] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.298 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.298 [2024-07-15 11:36:05.823278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.556 [2024-07-15 11:36:05.901395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.556 [2024-07-15 11:36:05.901430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.556 [2024-07-15 11:36:05.901437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.556 [2024-07-15 11:36:05.901443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.556 [2024-07-15 11:36:05.901448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
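The discovery stage recorded immediately above and below amounts to the following RPC sequence. This is a condensed, illustrative bash sketch assembled from the nvmf_tgt and rpc_cmd invocations visible in this log (the workspace path, the cvl_0_0_ns_spdk namespace, the 10.0.0.2 listener address, discovery port 8009 and the NQNs are taken from the run itself; running the sequence standalone like this is an assumption, not additional output from the captured job):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target side: start nvmf_tgt inside the test network namespace (as recorded above),
# then configure the TCP transport, the discovery listener and two null bdevs.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$SPDK/scripts/rpc.py bdev_null_create null0 1000 512
$SPDK/scripts/rpc.py bdev_null_create null1 1000 512
$SPDK/scripts/rpc.py bdev_wait_for_examine

# Host side: a second SPDK app listening on /tmp/host.sock drives the discovery
# service against port 8009 with the test host NQN, mirroring the discovery.sh
# steps that follow in this log.
$SPDK/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
$SPDK/scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
$SPDK/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
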
00:25:22.556 [2024-07-15 11:36:05.901466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.123 [2024-07-15 11:36:06.617167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.123 [2024-07-15 11:36:06.629327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.123 null0 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.123 null1 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=709641 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 709641 /tmp/host.sock 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 709641 ']' 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:23.123 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:23.123 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.123 [2024-07-15 11:36:06.704916] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:25:23.123 [2024-07-15 11:36:06.704959] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709641 ] 00:25:23.382 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.382 [2024-07-15 11:36:06.772483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.382 [2024-07-15 11:36:06.848396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.948 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.948 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:23.948 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:23.948 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:23.948 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.948 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.948 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.948 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:23.948 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.948 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.948 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.948 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:24.230 11:36:07 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.230 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.231 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:24.489 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.489 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:24.489 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:24.489 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.489 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.489 [2024-07-15 11:36:07.864581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.490 
11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:24.490 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:24.490 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:25.056 [2024-07-15 11:36:08.588359] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:25.056 [2024-07-15 11:36:08.588380] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:25.056 [2024-07-15 11:36:08.588393] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:25.313 [2024-07-15 11:36:08.674654] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:25.313 [2024-07-15 11:36:08.779617] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:25.313 [2024-07-15 11:36:08.779636] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:25.571 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:25.829 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.830 [2024-07-15 11:36:09.380667] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:25.830 [2024-07-15 11:36:09.381637] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:25.830 [2024-07-15 11:36:09.381658] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.830 11:36:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:26.105 [2024-07-15 11:36:09.468908] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:26.105 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:26.105 [2024-07-15 11:36:09.572494] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:26.105 [2024-07-15 11:36:09.572511] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:26.105 [2024-07-15 11:36:09.572517] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:27.038 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:27.039 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:27.039 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:27.039 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:27.039 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:27.039 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:27.039 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:27.039 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:27.039 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:27.039 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.039 11:36:10 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:27.039 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:27.039 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.297 [2024-07-15 11:36:10.644733] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:27.297 [2024-07-15 11:36:10.644761] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:27.297 [2024-07-15 11:36:10.652147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.297 [2024-07-15 11:36:10.652168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.297 [2024-07-15 11:36:10.652177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.297 [2024-07-15 11:36:10.652184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.297 [2024-07-15 11:36:10.652196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.297 [2024-07-15 11:36:10.652203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.297 [2024-07-15 11:36:10.652210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.297 [2024-07-15 11:36:10.652216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.297 [2024-07-15 11:36:10.652223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x133ef10 is same with the state(5) to be set 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.297 [2024-07-15 11:36:10.662159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133ef10 (9): Bad file descriptor 00:25:27.297 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.297 [2024-07-15 11:36:10.672196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.297 [2024-07-15 11:36:10.672418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.297 [2024-07-15 11:36:10.672434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x133ef10 with addr=10.0.0.2, port=4420 00:25:27.297 [2024-07-15 11:36:10.672443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133ef10 is same with the state(5) to be set 00:25:27.297 [2024-07-15 11:36:10.672454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133ef10 (9): Bad file descriptor 00:25:27.297 [2024-07-15 11:36:10.672465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.297 [2024-07-15 11:36:10.672472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.297 [2024-07-15 11:36:10.672480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.297 [2024-07-15 11:36:10.672492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
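The "connect() failed, errno = 111" / "Resetting controller failed" messages above are the host-side bdev_nvme module retrying the 10.0.0.2:4420 path after its listener has gone away; the surrounding waitforcondition loops poll get_subsystem_paths until only 4421 remains. A minimal plain-shell sketch of that poll, assuming rpc.py is SPDK's scripts/rpc.py client and /tmp/host.sock is the host application's RPC socket, as used throughout this trace:

    # Sketch of the traced get_subsystem_paths check (assumed client: SPDK scripts/rpc.py).
    HOST_SOCK=/tmp/host.sock
    # Print the trsvcid (port) of every path still attached to controller nvme0.
    rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # Expected output: "4420 4421" while both listeners exist, "4421" once 4420 is gone.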
00:25:27.297 [2024-07-15 11:36:10.682252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.297 [2024-07-15 11:36:10.682414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.297 [2024-07-15 11:36:10.682428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x133ef10 with addr=10.0.0.2, port=4420 00:25:27.297 [2024-07-15 11:36:10.682435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133ef10 is same with the state(5) to be set 00:25:27.297 [2024-07-15 11:36:10.682445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133ef10 (9): Bad file descriptor 00:25:27.297 [2024-07-15 11:36:10.682455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.297 [2024-07-15 11:36:10.682462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.297 [2024-07-15 11:36:10.682469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.297 [2024-07-15 11:36:10.682479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.297 [2024-07-15 11:36:10.692305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.298 [2024-07-15 11:36:10.692565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.298 [2024-07-15 11:36:10.692580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x133ef10 with addr=10.0.0.2, port=4420 00:25:27.298 [2024-07-15 11:36:10.692587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133ef10 is same with the state(5) to be set 00:25:27.298 [2024-07-15 11:36:10.692599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133ef10 (9): Bad file descriptor 00:25:27.298 [2024-07-15 11:36:10.692610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.298 [2024-07-15 11:36:10.692617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.298 [2024-07-15 11:36:10.692623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.298 [2024-07-15 11:36:10.692633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:27.298 [2024-07-15 11:36:10.702357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.298 [2024-07-15 11:36:10.702589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.298 [2024-07-15 11:36:10.702605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x133ef10 with addr=10.0.0.2, port=4420 00:25:27.298 [2024-07-15 11:36:10.702612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133ef10 is same with the state(5) to be set 00:25:27.298 [2024-07-15 11:36:10.702622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133ef10 (9): Bad file descriptor 00:25:27.298 [2024-07-15 11:36:10.702633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.298 [2024-07-15 11:36:10.702639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.298 [2024-07-15 11:36:10.702646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.298 [2024-07-15 11:36:10.702655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:27.298 [2024-07-15 11:36:10.712407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.298 [2024-07-15 11:36:10.712654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.298 [2024-07-15 11:36:10.712669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x133ef10 with addr=10.0.0.2, port=4420 00:25:27.298 [2024-07-15 11:36:10.712677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133ef10 is same with the state(5) to be set 00:25:27.298 [2024-07-15 11:36:10.712687] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133ef10 (9): Bad file descriptor 00:25:27.298 [2024-07-15 11:36:10.712697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.298 [2024-07-15 11:36:10.712703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.298 [2024-07-15 11:36:10.712709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.298 [2024-07-15 11:36:10.712719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.298 [2024-07-15 11:36:10.722460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.298 [2024-07-15 11:36:10.722593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.298 [2024-07-15 11:36:10.722607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x133ef10 with addr=10.0.0.2, port=4420 00:25:27.298 [2024-07-15 11:36:10.722613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133ef10 is same with the state(5) to be set 00:25:27.298 [2024-07-15 11:36:10.722624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133ef10 (9): Bad file descriptor 00:25:27.298 [2024-07-15 11:36:10.722633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.298 [2024-07-15 11:36:10.722639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.298 [2024-07-15 11:36:10.722647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.298 [2024-07-15 11:36:10.722656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
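A few steps further on, once the stale 4420 path has been pruned, the trace stops the discovery service and verifies that the controller, its bdevs, and the notification log all reflect the removal. A short sketch of that verification sequence under the same assumptions (SPDK rpc.py client, /tmp/host.sock host socket, last seen notification id 2 as in this run):

    HOST_SOCK=/tmp/host.sock
    # Stop the discovery service; this also detaches the auto-attached nvme0 controller.
    rpc.py -s "$HOST_SOCK" bdev_nvme_stop_discovery -b nvme
    # Both of these should now print nothing.
    rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'
    rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name'
    # Count the removal notifications raised since the last seen id (2 here).
    rpc.py -s "$HOST_SOCK" notify_get_notifications -i 2 | jq '. | length'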
00:25:27.298 [2024-07-15 11:36:10.732019] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:27.298 [2024-07-15 11:36:10.732035] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.298 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:27.558 
11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.558 11:36:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.558 11:36:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:27.558 11:36:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:27.558 11:36:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:27.558 11:36:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:27.558 11:36:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:27.558 11:36:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.558 11:36:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.494 [2024-07-15 11:36:12.076375] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:28.494 [2024-07-15 11:36:12.076394] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:28.494 [2024-07-15 11:36:12.076405] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:28.752 [2024-07-15 11:36:12.164670] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:29.011 [2024-07-15 11:36:12.473763] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:29.011 [2024-07-15 11:36:12.473795] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:29.011 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.011 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:29.011 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:29.011 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:29.011 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:29.011 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:29.011 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:29.011 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:29.011 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:29.011 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.011 11:36:12 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:29.011 request: 00:25:29.011 { 00:25:29.011 "name": "nvme", 00:25:29.011 "trtype": "tcp", 00:25:29.011 "traddr": "10.0.0.2", 00:25:29.012 "adrfam": "ipv4", 00:25:29.012 "trsvcid": "8009", 00:25:29.012 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:29.012 "wait_for_attach": true, 00:25:29.012 "method": "bdev_nvme_start_discovery", 00:25:29.012 "req_id": 1 00:25:29.012 } 00:25:29.012 Got JSON-RPC error response 00:25:29.012 response: 00:25:29.012 { 00:25:29.012 "code": -17, 00:25:29.012 "message": "File exists" 00:25:29.012 } 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.012 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.271 request: 00:25:29.271 { 00:25:29.271 "name": "nvme_second", 00:25:29.271 "trtype": "tcp", 00:25:29.271 "traddr": "10.0.0.2", 00:25:29.271 "adrfam": "ipv4", 00:25:29.271 "trsvcid": "8009", 00:25:29.271 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:29.271 "wait_for_attach": true, 00:25:29.271 "method": "bdev_nvme_start_discovery", 00:25:29.271 "req_id": 1 00:25:29.271 } 00:25:29.272 Got JSON-RPC error response 00:25:29.272 response: 00:25:29.272 { 00:25:29.272 "code": -17, 00:25:29.272 "message": "File exists" 00:25:29.272 } 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.272 11:36:12 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.272 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.206 [2024-07-15 11:36:13.721290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.206 [2024-07-15 11:36:13.721320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x137ba00 with addr=10.0.0.2, port=8010 00:25:30.206 [2024-07-15 11:36:13.721335] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:30.206 [2024-07-15 11:36:13.721342] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:30.206 [2024-07-15 11:36:13.721348] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:31.142 [2024-07-15 11:36:14.723728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.142 [2024-07-15 11:36:14.723753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x137ba00 with addr=10.0.0.2, port=8010 00:25:31.142 [2024-07-15 11:36:14.723763] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:31.142 [2024-07-15 11:36:14.723769] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:31.142 [2024-07-15 11:36:14.723775] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:32.518 [2024-07-15 11:36:15.725878] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:32.518 request: 00:25:32.518 { 00:25:32.518 "name": "nvme_second", 00:25:32.518 "trtype": "tcp", 00:25:32.518 "traddr": "10.0.0.2", 00:25:32.518 "adrfam": "ipv4", 00:25:32.518 "trsvcid": "8010", 00:25:32.518 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:32.518 "wait_for_attach": false, 00:25:32.518 "attach_timeout_ms": 3000, 00:25:32.518 "method": "bdev_nvme_start_discovery", 00:25:32.518 "req_id": 1 00:25:32.518 } 00:25:32.518 Got JSON-RPC error response 00:25:32.518 response: 00:25:32.518 { 00:25:32.518 "code": -110, 
00:25:32.518 "message": "Connection timed out" 00:25:32.518 } 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 709641 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:32.518 rmmod nvme_tcp 00:25:32.518 rmmod nvme_fabrics 00:25:32.518 rmmod nvme_keyring 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:32.518 11:36:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:32.519 11:36:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 709595 ']' 00:25:32.519 11:36:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 709595 00:25:32.519 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 709595 ']' 00:25:32.519 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 709595 00:25:32.519 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:25:32.519 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:32.519 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 709595 00:25:32.519 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:32.519 
11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:32.519 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 709595' 00:25:32.519 killing process with pid 709595 00:25:32.519 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 709595 00:25:32.519 11:36:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 709595 00:25:32.519 11:36:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:32.519 11:36:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:32.519 11:36:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:32.519 11:36:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:32.519 11:36:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:32.519 11:36:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.519 11:36:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.519 11:36:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:35.053 00:25:35.053 real 0m18.300s 00:25:35.053 user 0m22.787s 00:25:35.053 sys 0m5.764s 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.053 ************************************ 00:25:35.053 END TEST nvmf_host_discovery 00:25:35.053 ************************************ 00:25:35.053 11:36:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:35.053 11:36:18 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:35.053 11:36:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:35.053 11:36:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:35.053 11:36:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:35.053 ************************************ 00:25:35.053 START TEST nvmf_host_multipath_status 00:25:35.053 ************************************ 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:35.053 * Looking for test storage... 
00:25:35.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.053 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:35.054 11:36:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:35.054 11:36:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:40.389 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.389 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:40.389 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:40.389 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:40.389 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:40.389 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:40.389 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:40.389 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:40.389 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:40.389 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:40.389 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:40.389 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:40.390 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:40.390 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:40.390 Found net devices under 0000:86:00.0: cvl_0_0 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:40.390 Found net devices under 0000:86:00.1: cvl_0_1 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:40.390 11:36:23 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:40.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:25:40.390 00:25:40.390 --- 10.0.0.2 ping statistics --- 00:25:40.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.390 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:25:40.390 00:25:40.390 --- 10.0.0.1 ping statistics --- 00:25:40.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.390 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:40.390 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:40.648 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:40.648 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:40.648 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.648 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:40.648 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=715175 00:25:40.648 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 715175 00:25:40.648 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:40.648 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 715175 ']' 00:25:40.648 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.648 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.648 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.648 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.649 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 [2024-07-15 11:36:24.055626] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:25:40.649 [2024-07-15 11:36:24.055673] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.649 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.649 [2024-07-15 11:36:24.127522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:40.649 [2024-07-15 11:36:24.213540] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.649 [2024-07-15 11:36:24.213576] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.649 [2024-07-15 11:36:24.213582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.649 [2024-07-15 11:36:24.213588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.649 [2024-07-15 11:36:24.213594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.649 [2024-07-15 11:36:24.213643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.649 [2024-07-15 11:36:24.213644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.583 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:41.583 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:41.583 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:41.583 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:41.583 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:41.583 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.583 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=715175 00:25:41.583 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:41.583 [2024-07-15 11:36:25.062524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.583 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:41.841 Malloc0 00:25:41.841 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:42.099 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:42.099 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.358 [2024-07-15 11:36:25.835914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.358 11:36:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:42.617 [2024-07-15 11:36:26.016400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:42.617 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=715561 00:25:42.617 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:42.617 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:42.617 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 715561 /var/tmp/bdevperf.sock 00:25:42.617 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 715561 ']' 00:25:42.617 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:42.617 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:42.617 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:42.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:42.617 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:42.617 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:43.553 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:43.553 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:43.553 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:43.553 11:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:43.812 Nvme0n1 00:25:43.812 11:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:44.378 Nvme0n1 00:25:44.378 11:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:44.378 11:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:46.289 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:46.289 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:46.550 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:46.550 11:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:47.924 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:47.924 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:47.924 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.924 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.924 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.924 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:47.924 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.924 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:48.182 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.182 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:48.182 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.182 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:48.182 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.182 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:48.182 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.182 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:48.441 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.441 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:48.441 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.441 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:48.699 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.699 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:48.699 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.699 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.699 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.699 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:48.699 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:48.958 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:49.216 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:50.184 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:50.184 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:50.184 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.184 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:50.443 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:50.443 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:50.443 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.443 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:50.701 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.701 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:50.701 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.701 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.701 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.701 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.701 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.701 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.961 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.961 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.961 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.961 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.220 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.220 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:51.220 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:51.220 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.478 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.478 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:51.478 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:51.737 11:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:51.737 11:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:53.112 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:53.112 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:53.112 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.112 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:53.112 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.112 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:53.112 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.112 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:53.112 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.112 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:53.112 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.112 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:53.370 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.370 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:53.370 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.370 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:53.629 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.629 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:53.629 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.629 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.918 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.918 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:53.918 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:53.918 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.918 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.918 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:53.918 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:54.182 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:54.441 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:55.378 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:55.378 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:55.379 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.379 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:55.638 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.638 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:55.638 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.638 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:55.638 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.638 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:55.638 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.897 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:55.897 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.897 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:55.897 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.897 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:56.156 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.156 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:56.156 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.156 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:56.416 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:25:56.416 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:56.416 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.416 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:56.416 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.416 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:56.416 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:56.675 11:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:56.934 11:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:57.871 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:57.871 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:57.871 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.871 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.130 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.130 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:58.130 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.130 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:58.388 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.388 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:58.388 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.388 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.388 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.388 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:25:58.388 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.388 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:58.647 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.647 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:58.648 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.648 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:58.907 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.907 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:58.907 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:58.907 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.907 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.907 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:58.907 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:59.167 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:59.425 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:00.361 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:00.361 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:00.361 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.361 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.620 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.620 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:00.620 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.620 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.879 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.879 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.879 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.879 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:00.879 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.879 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:00.879 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.879 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:01.139 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.139 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:01.139 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.139 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.398 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.398 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:01.398 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.398 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:01.398 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.398 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:01.657 11:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:01.657 11:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:01.918 11:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:02.177 11:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:03.115 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:03.115 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:03.115 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.115 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:03.374 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.374 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:03.374 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.374 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.374 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.374 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.374 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.374 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.633 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.633 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.633 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.634 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.893 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.893 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:03.893 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.893 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:04.151 11:36:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.151 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:04.151 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.151 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:04.151 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.151 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:04.151 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:04.410 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:04.669 11:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:05.604 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:05.604 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:05.604 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.604 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.863 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.863 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:05.863 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.863 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:06.122 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.122 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:06.122 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:06.122 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.122 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.122 11:36:49 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.122 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.123 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.382 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.382 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:06.382 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.382 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.641 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.641 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:06.641 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.641 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.899 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.899 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:06.899 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:06.899 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:07.158 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:08.094 11:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:08.094 11:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:08.094 11:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.094 11:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:08.351 11:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.351 11:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:08.351 11:36:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.351 11:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:08.610 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.610 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:08.610 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.610 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.870 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.870 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.870 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.870 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.870 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.870 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.870 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.870 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:09.181 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.181 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:09.181 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.181 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:09.439 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.439 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:09.439 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:09.439 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:09.697 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:10.631 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:10.631 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.631 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.631 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.889 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.889 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:10.889 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.889 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:11.147 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:11.147 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:11.147 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.147 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.407 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.407 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.407 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.407 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.407 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.407 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.407 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.407 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.666 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.666 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:11.666 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.666 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.925 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:11.925 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 715561 00:26:11.925 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 715561 ']' 00:26:11.925 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 715561 00:26:11.925 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:11.925 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:11.925 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 715561 00:26:11.925 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:11.925 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:11.925 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 715561' 00:26:11.925 killing process with pid 715561 00:26:11.925 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 715561 00:26:11.925 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 715561 00:26:11.925 Connection closed with partial response: 00:26:11.925 00:26:11.925 00:26:12.189 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 715561 00:26:12.189 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:12.189 [2024-07-15 11:36:26.086157] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:26:12.189 [2024-07-15 11:36:26.086204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715561 ] 00:26:12.189 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.189 [2024-07-15 11:36:26.150836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.189 [2024-07-15 11:36:26.226383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:12.189 Running I/O for 90 seconds... 
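The repeated RPC/jq pairs logged above all follow one pattern from host/multipath_status.sh: set_ANA_state issues a pair of nvmf_subsystem_listener_set_ana_state calls (one per listener port) against the target, and each port_status check queries bdev_nvme_get_io_paths over the bdevperf RPC socket and filters a single field with jq. A minimal stand-alone sketch of that pattern, reconstructed from the logged commands rather than copied from the script source (so the exact argument handling is an assumption), looks like this:

#!/usr/bin/env bash
# Sketch of the check pattern seen in the log above; the helper bodies are
# reconstructed from the logged rpc.py/jq invocations, not the script source.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Set the ANA state of the two listeners (ports 4420 and 4421) on the target.
set_ANA_state() {
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Check one field (current/connected/accessible) of the I/O path on a port,
# as reported by the bdevperf application over its RPC socket.
port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$($rpc -s $bdevperf_sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# Example: after 4420 goes inaccessible and 4421 optimized, I/O should move to 4421.
set_ANA_state inaccessible optimized
sleep 1
port_status 4420 current false
port_status 4421 current true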
00:26:12.189 [2024-07-15 11:36:40.156966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.189 [2024-07-15 11:36:40.157357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.189 [2024-07-15 11:36:40.157371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.157379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.157391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.157398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.157411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.157417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.157430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.157436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.157449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.157455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.157468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.157475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.157489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.157496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.190 [2024-07-15 11:36:40.158034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:12.190 [2024-07-15 11:36:40.158145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:12.190 [2024-07-15 11:36:40.158765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.190 [2024-07-15 11:36:40.158772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
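The nvme_qpair.c records above are the bdevperf log dumped from try.txt: each WRITE print_command is followed by a print_completion whose status is ASYMMETRIC ACCESS INACCESSIBLE (printed as 03/02), i.e. the path-related status the host sees on I/O routed to a listener that is in the inaccessible ANA state. A quick, informal way to tally those completions from the captured file (not part of the test itself; the grep patterns simply follow the record layout shown here) could be:

# Count ANA-inaccessible completions in the dumped bdevperf log, then list
# which qid/cid pairs reported them most often.
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' try.txt
grep 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' try.txt |
    grep -o 'qid:[0-9]* cid:[0-9]*' | sort | uniq -c | sort -rn | head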
00:26:12.191 [2024-07-15 11:36:40.158786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.191 [2024-07-15 11:36:40.158794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:12.191 [2024-07-15 11:36:40.158809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.191 [2024-07-15 11:36:40.158815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:12.191 [2024-07-15 11:36:40.158830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.191 [2024-07-15 11:36:40.158837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:12.191 [2024-07-15 11:36:40.158851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.191 [2024-07-15 11:36:40.158858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:12.191 [2024-07-15 11:36:40.158872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.191 [2024-07-15 11:36:40.158879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:12.191 [2024-07-15 11:36:40.158894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.191 [2024-07-15 11:36:40.158901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:12.191 [2024-07-15 11:36:40.158916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.191 [2024-07-15 11:36:40.158923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:12.191 [2024-07-15 11:36:40.158938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.191 [2024-07-15 11:36:40.158944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:12.191 [2024-07-15 11:36:40.158958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.191 [2024-07-15 11:36:40.158965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:12.191 [2024-07-15 11:36:40.158980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.191 [2024-07-15 11:36:40.158987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:12.191 [... repeated error notices omitted: nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs for WRITE lba 46080-46424 and READ lba 45480-45528 (at 11:36:40) and READ lba 110728-111560 (at 11:36:53), all on qid:1, each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:12.193 Received shutdown signal, test time was about 27.546322 seconds
00:26:12.193
00:26:12.193                                                Latency(us)
00:26:12.193 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min        max
00:26:12.193 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:12.193 	 Verification LBA range: start 0x0 length 0x4000
00:26:12.193 	 Nvme0n1              :      27.55  10336.18    40.38     0.00   0.00   12364.10  1146.88  3019898.88
00:26:12.193 ===================================================================================================================
00:26:12.193 Total                  :             10336.18    40.38     0.00   0.00   12364.10  1146.88  3019898.88
00:26:12.193 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:12.453 rmmod nvme_tcp
00:26:12.453 rmmod nvme_fabrics
00:26:12.453 rmmod nvme_keyring
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 715175 ']'
00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 715175 00:26:12.453
11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 715175 ']' 00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 715175 00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 715175 00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 715175' 00:26:12.453 killing process with pid 715175 00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 715175 00:26:12.453 11:36:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 715175 00:26:12.712 11:36:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:12.712 11:36:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:12.712 11:36:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:12.712 11:36:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:12.712 11:36:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:12.712 11:36:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.712 11:36:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:12.712 11:36:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.619 11:36:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:14.619 00:26:14.619 real 0m39.925s 00:26:14.619 user 1m47.977s 00:26:14.619 sys 0m10.690s 00:26:14.619 11:36:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:14.619 11:36:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:14.619 ************************************ 00:26:14.619 END TEST nvmf_host_multipath_status 00:26:14.619 ************************************ 00:26:14.619 11:36:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:14.619 11:36:58 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:14.619 11:36:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:14.619 11:36:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.620 11:36:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:14.880 ************************************ 00:26:14.880 START TEST nvmf_discovery_remove_ifc 00:26:14.880 ************************************ 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 
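(For reference, the nvmftestfini/killprocess sequence that closed the multipath test above, and which the discovery test started below repeats at its own exit, boils down to roughly the following. This is a simplified sketch of the helpers in test/nvmf/common.sh and test/common/autotest_common.sh, which wrap the same calls in retries, process-name checks and xtrace handling.)

    # delete the test subsystem from the running target (as invoked via rpc.py above)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    # unload the kernel initiator modules pulled in for the run
    # (drops nvme_tcp, nvme_fabrics and nvme_keyring, as the rmmod lines show)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the target application; 715175 was the nvmf_tgt pid in this run
    kill 715175 && wait 715175
    # network side: _remove_spdk_ns tears down the cvl_0_0_ns_spdk namespace
    # (assumption based on its name; the log only shows the helper being called)
    # and the initiator-side address is flushed
    ip -4 addr flush cvl_0_1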
00:26:14.880 * Looking for test storage... 00:26:14.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:14.880 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:20.155 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:20.155 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:20.155 11:37:03 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:20.155 Found net devices under 0000:86:00.0: cvl_0_0 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:20.155 Found net devices under 0000:86:00.1: cvl_0_1 00:26:20.155 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:20.156 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:20.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:26:20.415 00:26:20.415 --- 10.0.0.2 ping statistics --- 00:26:20.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.415 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:20.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:20.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:26:20.415 00:26:20.415 --- 10.0.0.1 ping statistics --- 00:26:20.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.415 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:20.415 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=723967 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 723967 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 723967 ']' 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:20.674 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.674 [2024-07-15 11:37:04.065086] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
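(The target network plumbing that nvmftestinit/nvmf_tcp_init traced above reads more easily as the condensed sketch below. These are the same commands shown in the log, with the initial address flushes omitted and the shell tracing stripped; cvl_0_0/cvl_0_1 are the two e810 ports detected under 0000:86:00.0/1 in this run, and backgrounding the target with & is added here for illustration.)

    ip netns add cvl_0_0_ns_spdk                                       # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # accept NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                                 # reachability check, initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # and target -> initiator
    # the target application is then started inside that namespace (nvmfappstart -m 0x2):
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The RPC calls that follow in the log (creating the TCP transport and the listeners on 10.0.0.2 ports 8009 and 4420) are issued against this instance once waitforlisten sees its /var/tmp/spdk.sock socket.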
00:26:20.674 [2024-07-15 11:37:04.065127] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.674 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.674 [2024-07-15 11:37:04.133543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.674 [2024-07-15 11:37:04.209574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.674 [2024-07-15 11:37:04.209609] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.674 [2024-07-15 11:37:04.209616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.674 [2024-07-15 11:37:04.209626] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.674 [2024-07-15 11:37:04.209631] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.674 [2024-07-15 11:37:04.209648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.611 [2024-07-15 11:37:04.925556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.611 [2024-07-15 11:37:04.933690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:21.611 null0 00:26:21.611 [2024-07-15 11:37:04.965691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=724116 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 724116 /tmp/host.sock 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:21.611 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 724116 ']' 00:26:21.612 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:21.612 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:21.612 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:21.612 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:21.612 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:21.612 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.612 [2024-07-15 11:37:05.032722] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:26:21.612 [2024-07-15 11:37:05.032762] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724116 ] 00:26:21.612 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.612 [2024-07-15 11:37:05.100617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.612 [2024-07-15 11:37:05.184878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.550 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.487 [2024-07-15 11:37:06.983712] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:23.487 [2024-07-15 11:37:06.983734] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:23.487 [2024-07-15 11:37:06.983744] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:23.745 [2024-07-15 11:37:07.110138] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:23.745 [2024-07-15 11:37:07.327381] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:23.745 [2024-07-15 11:37:07.327428] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:23.745 [2024-07-15 11:37:07.327449] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:23.745 [2024-07-15 11:37:07.327461] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:23.745 [2024-07-15 11:37:07.327479] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:23.745 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.745 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:23.745 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.745 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.745 [2024-07-15 11:37:07.333062] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19e2e30 was disconnected and freed. delete nvme_qpair. 00:26:23.745 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.745 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.745 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.745 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.745 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:24.004 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.005 11:37:07 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:24.005 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:25.382 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:25.382 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.382 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:25.382 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:25.382 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:25.382 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.382 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.382 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.382 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:25.382 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:26.319 11:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.319 11:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.319 11:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.319 11:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.319 11:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.319 11:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.319 11:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:26.319 11:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.319 11:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:26.319 11:37:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:27.287 11:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:27.287 11:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.287 11:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:27.287 11:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.287 11:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:27.287 11:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.287 11:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:27.287 11:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.287 11:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:27.287 11:37:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:28.220 11:37:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.220 11:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.220 11:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.220 11:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.220 11:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.220 11:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.220 11:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.220 11:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.220 11:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:28.220 11:37:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:29.152 11:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.410 11:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.410 11:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.410 11:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.410 11:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.410 11:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.410 11:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.410 11:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.410 [2024-07-15 11:37:12.768796] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:29.410 [2024-07-15 11:37:12.768836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.410 [2024-07-15 11:37:12.768846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.410 [2024-07-15 11:37:12.768856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.410 [2024-07-15 11:37:12.768862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.410 [2024-07-15 11:37:12.768869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.410 [2024-07-15 11:37:12.768876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.410 [2024-07-15 11:37:12.768882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.410 [2024-07-15 11:37:12.768889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:29.410 [2024-07-15 11:37:12.768896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.410 [2024-07-15 11:37:12.768902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.410 [2024-07-15 11:37:12.768908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a9690 is same with the state(5) to be set 00:26:29.410 [2024-07-15 11:37:12.778816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a9690 (9): Bad file descriptor 00:26:29.410 [2024-07-15 11:37:12.788855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:29.410 11:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:29.410 11:37:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:30.345 11:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.345 11:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.345 11:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.345 11:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.345 11:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.345 11:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.345 11:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.345 [2024-07-15 11:37:13.826286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:30.345 [2024-07-15 11:37:13.826360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a9690 with addr=10.0.0.2, port=4420 00:26:30.345 [2024-07-15 11:37:13.826389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a9690 is same with the state(5) to be set 00:26:30.345 [2024-07-15 11:37:13.826439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a9690 (9): Bad file descriptor 00:26:30.345 [2024-07-15 11:37:13.827367] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:30.345 [2024-07-15 11:37:13.827416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:30.345 [2024-07-15 11:37:13.827441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:30.345 [2024-07-15 11:37:13.827466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:30.346 [2024-07-15 11:37:13.827506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
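The reconnect and reset failures above are expected at this point: the target-side interface was taken down a few seconds earlier, and the host attached this path with short loss and reconnect timeouts. A minimal sketch of the equivalent direct RPC call, assuming rpc_cmd in this trace is the usual autotest wrapper around SPDK's scripts/rpc.py:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach   # flags as issued at host/discovery_remove_ifc.sh@69 above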
00:26:30.346 [2024-07-15 11:37:13.827530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:30.346 11:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.346 11:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:30.346 11:37:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.281 [2024-07-15 11:37:14.830030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:31.281 [2024-07-15 11:37:14.830053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:31.281 [2024-07-15 11:37:14.830061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:31.281 [2024-07-15 11:37:14.830067] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:31.281 [2024-07-15 11:37:14.830078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.281 [2024-07-15 11:37:14.830095] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:31.281 [2024-07-15 11:37:14.830114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.281 [2024-07-15 11:37:14.830122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.281 [2024-07-15 11:37:14.830131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.281 [2024-07-15 11:37:14.830137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.281 [2024-07-15 11:37:14.830145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.281 [2024-07-15 11:37:14.830152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.281 [2024-07-15 11:37:14.830159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.281 [2024-07-15 11:37:14.830170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.281 [2024-07-15 11:37:14.830177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.281 [2024-07-15 11:37:14.830184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.281 [2024-07-15 11:37:14.830192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
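While the controller sits in the failed state, the script keeps polling the bdev list until nvme0n1 drops out (wait_for_bdev ''). A rough sketch of the loop that the repeated get_bdev_list calls in this trace expand to; the helper body and the $expected variable are reconstructions for illustration, not verbatim script code:
  get_bdev_list() {
      # list current bdev names on the host app's RPC socket, one sorted line
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
          bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  while [[ "$(get_bdev_list)" != "$expected" ]]; do   # expected is '' after ifdown, nvme1n1 after re-add
      sleep 1
  done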
00:26:31.281 [2024-07-15 11:37:14.830730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a8a80 (9): Bad file descriptor 00:26:31.281 [2024-07-15 11:37:14.831740] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:31.281 [2024-07-15 11:37:14.831751] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:31.281 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.281 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.281 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.281 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.281 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.281 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.281 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.281 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.539 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:31.539 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:31.539 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:31.539 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:31.539 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.539 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.539 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.539 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.539 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.539 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.539 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.539 11:37:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.539 11:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:31.539 11:37:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:32.473 11:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.473 11:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.473 11:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.473 11:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.473 11:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.473 11:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.473 11:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.473 11:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.732 11:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:32.732 11:37:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.298 [2024-07-15 11:37:16.889764] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:33.298 [2024-07-15 11:37:16.889781] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:33.556 [2024-07-15 11:37:16.889793] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:33.556 [2024-07-15 11:37:17.016180] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:33.556 11:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.556 11:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.556 11:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.556 11:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.556 11:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.556 11:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.556 11:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.556 11:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.556 11:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:33.556 11:37:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.814 [2024-07-15 11:37:17.232877] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:33.814 [2024-07-15 11:37:17.232914] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:33.814 [2024-07-15 11:37:17.232932] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:33.814 [2024-07-15 11:37:17.232946] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:33.814 [2024-07-15 11:37:17.232952] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:33.814 [2024-07-15 11:37:17.238411] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19bf8d0 was disconnected and freed. delete nvme_qpair. 
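The second attach ("attach nvme1 done") happens once the target-side data path is restored inside the namespace, after which discovery on port 8009 re-finds the subsystem and recreates the bdev as nvme1n1. The two commands issued at host/discovery_remove_ifc.sh@82-83 above, shown together:
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up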
00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 724116 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 724116 ']' 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 724116 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 724116 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 724116' 00:26:34.750 killing process with pid 724116 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 724116 00:26:34.750 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 724116 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:35.010 rmmod nvme_tcp 00:26:35.010 rmmod nvme_fabrics 00:26:35.010 rmmod nvme_keyring 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:35.010 
11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 723967 ']' 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 723967 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 723967 ']' 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 723967 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723967 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723967' 00:26:35.010 killing process with pid 723967 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 723967 00:26:35.010 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 723967 00:26:35.269 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:35.269 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:35.269 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:35.269 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:35.269 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:35.269 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.269 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.269 11:37:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.174 11:37:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:37.174 00:26:37.174 real 0m22.542s 00:26:37.174 user 0m29.143s 00:26:37.174 sys 0m5.603s 00:26:37.174 11:37:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:37.174 11:37:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.174 ************************************ 00:26:37.174 END TEST nvmf_discovery_remove_ifc 00:26:37.174 ************************************ 00:26:37.434 11:37:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:37.434 11:37:20 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:37.434 11:37:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:37.434 11:37:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.434 11:37:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.434 ************************************ 00:26:37.434 START TEST nvmf_identify_kernel_target 00:26:37.434 ************************************ 00:26:37.434 11:37:20 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:37.434 * Looking for test storage... 00:26:37.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.434 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:37.435 11:37:20 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:37.435 11:37:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:44.002 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.002 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:44.002 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:44.003 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:44.003 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:44.003 Found net devices under 0000:86:00.0: cvl_0_0 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:44.003 Found net devices under 0000:86:00.1: cvl_0_1 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:44.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:26:44.003 00:26:44.003 --- 10.0.0.2 ping statistics --- 00:26:44.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.003 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:26:44.003 00:26:44.003 --- 10.0.0.1 ping statistics --- 00:26:44.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.003 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.003 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:44.004 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:44.004 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:44.004 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:44.004 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:44.004 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:44.004 11:37:26 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:44.004 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:44.004 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:44.004 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:44.004 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:44.004 11:37:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:45.905 Waiting for block devices as requested 00:26:45.905 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:46.163 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:46.163 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:46.163 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:46.163 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:46.421 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:46.421 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:46.421 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:46.702 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:46.702 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:46.702 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:46.977 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:46.977 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:46.977 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:46.977 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:47.236 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:47.236 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:47.237 No valid GPT data, bailing 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:47.237 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:47.498 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:47.498 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:47.498 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:47.498 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:47.498 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:47.498 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:47.498 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:47.498 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:47.498 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:47.498 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:47.498 00:26:47.498 Discovery Log Number of Records 2, Generation counter 2 00:26:47.498 =====Discovery Log Entry 0====== 00:26:47.498 trtype: tcp 00:26:47.498 adrfam: ipv4 00:26:47.498 subtype: current discovery subsystem 00:26:47.498 treq: not specified, sq flow control disable supported 00:26:47.498 portid: 1 00:26:47.498 trsvcid: 4420 00:26:47.498 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:47.498 traddr: 10.0.0.1 00:26:47.498 eflags: none 00:26:47.498 sectype: none 00:26:47.498 =====Discovery Log Entry 1====== 00:26:47.498 trtype: tcp 00:26:47.498 adrfam: ipv4 00:26:47.498 subtype: nvme subsystem 00:26:47.498 treq: not specified, sq flow control disable supported 00:26:47.498 portid: 1 00:26:47.498 trsvcid: 4420 00:26:47.498 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:47.498 traddr: 10.0.0.1 00:26:47.498 eflags: none 00:26:47.498 sectype: none 00:26:47.498 11:37:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:47.498 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:47.498 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.498 ===================================================== 00:26:47.498 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:47.498 ===================================================== 00:26:47.498 Controller Capabilities/Features 00:26:47.498 ================================ 00:26:47.498 Vendor ID: 0000 00:26:47.498 Subsystem Vendor ID: 0000 00:26:47.498 Serial Number: 048a739f3cb184066b58 00:26:47.498 Model Number: Linux 00:26:47.498 Firmware Version: 6.7.0-68 00:26:47.498 Recommended Arb Burst: 0 00:26:47.498 IEEE OUI Identifier: 00 00 00 00:26:47.498 Multi-path I/O 00:26:47.498 May have multiple subsystem ports: No 00:26:47.498 May have multiple 
controllers: No 00:26:47.498 Associated with SR-IOV VF: No 00:26:47.498 Max Data Transfer Size: Unlimited 00:26:47.498 Max Number of Namespaces: 0 00:26:47.498 Max Number of I/O Queues: 1024 00:26:47.498 NVMe Specification Version (VS): 1.3 00:26:47.498 NVMe Specification Version (Identify): 1.3 00:26:47.498 Maximum Queue Entries: 1024 00:26:47.498 Contiguous Queues Required: No 00:26:47.498 Arbitration Mechanisms Supported 00:26:47.498 Weighted Round Robin: Not Supported 00:26:47.498 Vendor Specific: Not Supported 00:26:47.498 Reset Timeout: 7500 ms 00:26:47.498 Doorbell Stride: 4 bytes 00:26:47.498 NVM Subsystem Reset: Not Supported 00:26:47.498 Command Sets Supported 00:26:47.498 NVM Command Set: Supported 00:26:47.498 Boot Partition: Not Supported 00:26:47.498 Memory Page Size Minimum: 4096 bytes 00:26:47.498 Memory Page Size Maximum: 4096 bytes 00:26:47.498 Persistent Memory Region: Not Supported 00:26:47.498 Optional Asynchronous Events Supported 00:26:47.498 Namespace Attribute Notices: Not Supported 00:26:47.498 Firmware Activation Notices: Not Supported 00:26:47.498 ANA Change Notices: Not Supported 00:26:47.498 PLE Aggregate Log Change Notices: Not Supported 00:26:47.498 LBA Status Info Alert Notices: Not Supported 00:26:47.498 EGE Aggregate Log Change Notices: Not Supported 00:26:47.498 Normal NVM Subsystem Shutdown event: Not Supported 00:26:47.498 Zone Descriptor Change Notices: Not Supported 00:26:47.498 Discovery Log Change Notices: Supported 00:26:47.498 Controller Attributes 00:26:47.498 128-bit Host Identifier: Not Supported 00:26:47.498 Non-Operational Permissive Mode: Not Supported 00:26:47.498 NVM Sets: Not Supported 00:26:47.498 Read Recovery Levels: Not Supported 00:26:47.498 Endurance Groups: Not Supported 00:26:47.498 Predictable Latency Mode: Not Supported 00:26:47.498 Traffic Based Keep ALive: Not Supported 00:26:47.498 Namespace Granularity: Not Supported 00:26:47.498 SQ Associations: Not Supported 00:26:47.498 UUID List: Not Supported 00:26:47.498 Multi-Domain Subsystem: Not Supported 00:26:47.498 Fixed Capacity Management: Not Supported 00:26:47.498 Variable Capacity Management: Not Supported 00:26:47.498 Delete Endurance Group: Not Supported 00:26:47.498 Delete NVM Set: Not Supported 00:26:47.498 Extended LBA Formats Supported: Not Supported 00:26:47.498 Flexible Data Placement Supported: Not Supported 00:26:47.498 00:26:47.498 Controller Memory Buffer Support 00:26:47.498 ================================ 00:26:47.498 Supported: No 00:26:47.498 00:26:47.498 Persistent Memory Region Support 00:26:47.498 ================================ 00:26:47.498 Supported: No 00:26:47.498 00:26:47.498 Admin Command Set Attributes 00:26:47.498 ============================ 00:26:47.498 Security Send/Receive: Not Supported 00:26:47.498 Format NVM: Not Supported 00:26:47.498 Firmware Activate/Download: Not Supported 00:26:47.498 Namespace Management: Not Supported 00:26:47.498 Device Self-Test: Not Supported 00:26:47.498 Directives: Not Supported 00:26:47.498 NVMe-MI: Not Supported 00:26:47.498 Virtualization Management: Not Supported 00:26:47.498 Doorbell Buffer Config: Not Supported 00:26:47.498 Get LBA Status Capability: Not Supported 00:26:47.498 Command & Feature Lockdown Capability: Not Supported 00:26:47.498 Abort Command Limit: 1 00:26:47.498 Async Event Request Limit: 1 00:26:47.498 Number of Firmware Slots: N/A 00:26:47.498 Firmware Slot 1 Read-Only: N/A 00:26:47.498 Firmware Activation Without Reset: N/A 00:26:47.498 Multiple Update Detection Support: N/A 
00:26:47.498 Firmware Update Granularity: No Information Provided 00:26:47.498 Per-Namespace SMART Log: No 00:26:47.498 Asymmetric Namespace Access Log Page: Not Supported 00:26:47.498 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:47.498 Command Effects Log Page: Not Supported 00:26:47.498 Get Log Page Extended Data: Supported 00:26:47.498 Telemetry Log Pages: Not Supported 00:26:47.498 Persistent Event Log Pages: Not Supported 00:26:47.498 Supported Log Pages Log Page: May Support 00:26:47.498 Commands Supported & Effects Log Page: Not Supported 00:26:47.498 Feature Identifiers & Effects Log Page:May Support 00:26:47.498 NVMe-MI Commands & Effects Log Page: May Support 00:26:47.498 Data Area 4 for Telemetry Log: Not Supported 00:26:47.498 Error Log Page Entries Supported: 1 00:26:47.498 Keep Alive: Not Supported 00:26:47.498 00:26:47.498 NVM Command Set Attributes 00:26:47.498 ========================== 00:26:47.498 Submission Queue Entry Size 00:26:47.498 Max: 1 00:26:47.498 Min: 1 00:26:47.498 Completion Queue Entry Size 00:26:47.498 Max: 1 00:26:47.498 Min: 1 00:26:47.498 Number of Namespaces: 0 00:26:47.498 Compare Command: Not Supported 00:26:47.498 Write Uncorrectable Command: Not Supported 00:26:47.498 Dataset Management Command: Not Supported 00:26:47.498 Write Zeroes Command: Not Supported 00:26:47.498 Set Features Save Field: Not Supported 00:26:47.498 Reservations: Not Supported 00:26:47.498 Timestamp: Not Supported 00:26:47.498 Copy: Not Supported 00:26:47.498 Volatile Write Cache: Not Present 00:26:47.498 Atomic Write Unit (Normal): 1 00:26:47.498 Atomic Write Unit (PFail): 1 00:26:47.498 Atomic Compare & Write Unit: 1 00:26:47.498 Fused Compare & Write: Not Supported 00:26:47.498 Scatter-Gather List 00:26:47.498 SGL Command Set: Supported 00:26:47.498 SGL Keyed: Not Supported 00:26:47.499 SGL Bit Bucket Descriptor: Not Supported 00:26:47.499 SGL Metadata Pointer: Not Supported 00:26:47.499 Oversized SGL: Not Supported 00:26:47.499 SGL Metadata Address: Not Supported 00:26:47.499 SGL Offset: Supported 00:26:47.499 Transport SGL Data Block: Not Supported 00:26:47.499 Replay Protected Memory Block: Not Supported 00:26:47.499 00:26:47.499 Firmware Slot Information 00:26:47.499 ========================= 00:26:47.499 Active slot: 0 00:26:47.499 00:26:47.499 00:26:47.499 Error Log 00:26:47.499 ========= 00:26:47.499 00:26:47.499 Active Namespaces 00:26:47.499 ================= 00:26:47.499 Discovery Log Page 00:26:47.499 ================== 00:26:47.499 Generation Counter: 2 00:26:47.499 Number of Records: 2 00:26:47.499 Record Format: 0 00:26:47.499 00:26:47.499 Discovery Log Entry 0 00:26:47.499 ---------------------- 00:26:47.499 Transport Type: 3 (TCP) 00:26:47.499 Address Family: 1 (IPv4) 00:26:47.499 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:47.499 Entry Flags: 00:26:47.499 Duplicate Returned Information: 0 00:26:47.499 Explicit Persistent Connection Support for Discovery: 0 00:26:47.499 Transport Requirements: 00:26:47.499 Secure Channel: Not Specified 00:26:47.499 Port ID: 1 (0x0001) 00:26:47.499 Controller ID: 65535 (0xffff) 00:26:47.499 Admin Max SQ Size: 32 00:26:47.499 Transport Service Identifier: 4420 00:26:47.499 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:47.499 Transport Address: 10.0.0.1 00:26:47.499 Discovery Log Entry 1 00:26:47.499 ---------------------- 00:26:47.499 Transport Type: 3 (TCP) 00:26:47.499 Address Family: 1 (IPv4) 00:26:47.499 Subsystem Type: 2 (NVM Subsystem) 00:26:47.499 Entry Flags: 
00:26:47.499 Duplicate Returned Information: 0 00:26:47.499 Explicit Persistent Connection Support for Discovery: 0 00:26:47.499 Transport Requirements: 00:26:47.499 Secure Channel: Not Specified 00:26:47.499 Port ID: 1 (0x0001) 00:26:47.499 Controller ID: 65535 (0xffff) 00:26:47.499 Admin Max SQ Size: 32 00:26:47.499 Transport Service Identifier: 4420 00:26:47.499 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:47.499 Transport Address: 10.0.0.1 00:26:47.499 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:47.499 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.499 get_feature(0x01) failed 00:26:47.499 get_feature(0x02) failed 00:26:47.499 get_feature(0x04) failed 00:26:47.499 ===================================================== 00:26:47.499 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:47.499 ===================================================== 00:26:47.499 Controller Capabilities/Features 00:26:47.499 ================================ 00:26:47.499 Vendor ID: 0000 00:26:47.499 Subsystem Vendor ID: 0000 00:26:47.499 Serial Number: 4ffafa3391b28f04582b 00:26:47.499 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:47.499 Firmware Version: 6.7.0-68 00:26:47.499 Recommended Arb Burst: 6 00:26:47.499 IEEE OUI Identifier: 00 00 00 00:26:47.499 Multi-path I/O 00:26:47.499 May have multiple subsystem ports: Yes 00:26:47.499 May have multiple controllers: Yes 00:26:47.499 Associated with SR-IOV VF: No 00:26:47.499 Max Data Transfer Size: Unlimited 00:26:47.499 Max Number of Namespaces: 1024 00:26:47.499 Max Number of I/O Queues: 128 00:26:47.499 NVMe Specification Version (VS): 1.3 00:26:47.499 NVMe Specification Version (Identify): 1.3 00:26:47.499 Maximum Queue Entries: 1024 00:26:47.499 Contiguous Queues Required: No 00:26:47.499 Arbitration Mechanisms Supported 00:26:47.499 Weighted Round Robin: Not Supported 00:26:47.499 Vendor Specific: Not Supported 00:26:47.499 Reset Timeout: 7500 ms 00:26:47.499 Doorbell Stride: 4 bytes 00:26:47.499 NVM Subsystem Reset: Not Supported 00:26:47.499 Command Sets Supported 00:26:47.499 NVM Command Set: Supported 00:26:47.499 Boot Partition: Not Supported 00:26:47.499 Memory Page Size Minimum: 4096 bytes 00:26:47.499 Memory Page Size Maximum: 4096 bytes 00:26:47.499 Persistent Memory Region: Not Supported 00:26:47.499 Optional Asynchronous Events Supported 00:26:47.499 Namespace Attribute Notices: Supported 00:26:47.499 Firmware Activation Notices: Not Supported 00:26:47.499 ANA Change Notices: Supported 00:26:47.499 PLE Aggregate Log Change Notices: Not Supported 00:26:47.499 LBA Status Info Alert Notices: Not Supported 00:26:47.499 EGE Aggregate Log Change Notices: Not Supported 00:26:47.499 Normal NVM Subsystem Shutdown event: Not Supported 00:26:47.499 Zone Descriptor Change Notices: Not Supported 00:26:47.499 Discovery Log Change Notices: Not Supported 00:26:47.499 Controller Attributes 00:26:47.499 128-bit Host Identifier: Supported 00:26:47.499 Non-Operational Permissive Mode: Not Supported 00:26:47.499 NVM Sets: Not Supported 00:26:47.499 Read Recovery Levels: Not Supported 00:26:47.499 Endurance Groups: Not Supported 00:26:47.499 Predictable Latency Mode: Not Supported 00:26:47.499 Traffic Based Keep ALive: Supported 00:26:47.499 Namespace Granularity: Not Supported 
00:26:47.499 SQ Associations: Not Supported 00:26:47.499 UUID List: Not Supported 00:26:47.499 Multi-Domain Subsystem: Not Supported 00:26:47.499 Fixed Capacity Management: Not Supported 00:26:47.499 Variable Capacity Management: Not Supported 00:26:47.499 Delete Endurance Group: Not Supported 00:26:47.499 Delete NVM Set: Not Supported 00:26:47.499 Extended LBA Formats Supported: Not Supported 00:26:47.499 Flexible Data Placement Supported: Not Supported 00:26:47.499 00:26:47.499 Controller Memory Buffer Support 00:26:47.499 ================================ 00:26:47.499 Supported: No 00:26:47.499 00:26:47.499 Persistent Memory Region Support 00:26:47.499 ================================ 00:26:47.499 Supported: No 00:26:47.499 00:26:47.499 Admin Command Set Attributes 00:26:47.499 ============================ 00:26:47.499 Security Send/Receive: Not Supported 00:26:47.499 Format NVM: Not Supported 00:26:47.499 Firmware Activate/Download: Not Supported 00:26:47.499 Namespace Management: Not Supported 00:26:47.499 Device Self-Test: Not Supported 00:26:47.499 Directives: Not Supported 00:26:47.499 NVMe-MI: Not Supported 00:26:47.499 Virtualization Management: Not Supported 00:26:47.499 Doorbell Buffer Config: Not Supported 00:26:47.499 Get LBA Status Capability: Not Supported 00:26:47.499 Command & Feature Lockdown Capability: Not Supported 00:26:47.499 Abort Command Limit: 4 00:26:47.499 Async Event Request Limit: 4 00:26:47.499 Number of Firmware Slots: N/A 00:26:47.499 Firmware Slot 1 Read-Only: N/A 00:26:47.499 Firmware Activation Without Reset: N/A 00:26:47.499 Multiple Update Detection Support: N/A 00:26:47.499 Firmware Update Granularity: No Information Provided 00:26:47.499 Per-Namespace SMART Log: Yes 00:26:47.499 Asymmetric Namespace Access Log Page: Supported 00:26:47.499 ANA Transition Time : 10 sec 00:26:47.499 00:26:47.499 Asymmetric Namespace Access Capabilities 00:26:47.499 ANA Optimized State : Supported 00:26:47.499 ANA Non-Optimized State : Supported 00:26:47.499 ANA Inaccessible State : Supported 00:26:47.499 ANA Persistent Loss State : Supported 00:26:47.499 ANA Change State : Supported 00:26:47.499 ANAGRPID is not changed : No 00:26:47.499 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:47.499 00:26:47.499 ANA Group Identifier Maximum : 128 00:26:47.499 Number of ANA Group Identifiers : 128 00:26:47.499 Max Number of Allowed Namespaces : 1024 00:26:47.499 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:47.499 Command Effects Log Page: Supported 00:26:47.499 Get Log Page Extended Data: Supported 00:26:47.499 Telemetry Log Pages: Not Supported 00:26:47.499 Persistent Event Log Pages: Not Supported 00:26:47.499 Supported Log Pages Log Page: May Support 00:26:47.499 Commands Supported & Effects Log Page: Not Supported 00:26:47.499 Feature Identifiers & Effects Log Page:May Support 00:26:47.499 NVMe-MI Commands & Effects Log Page: May Support 00:26:47.499 Data Area 4 for Telemetry Log: Not Supported 00:26:47.499 Error Log Page Entries Supported: 128 00:26:47.499 Keep Alive: Supported 00:26:47.499 Keep Alive Granularity: 1000 ms 00:26:47.499 00:26:47.499 NVM Command Set Attributes 00:26:47.499 ========================== 00:26:47.499 Submission Queue Entry Size 00:26:47.499 Max: 64 00:26:47.499 Min: 64 00:26:47.499 Completion Queue Entry Size 00:26:47.499 Max: 16 00:26:47.499 Min: 16 00:26:47.499 Number of Namespaces: 1024 00:26:47.499 Compare Command: Not Supported 00:26:47.499 Write Uncorrectable Command: Not Supported 00:26:47.499 Dataset Management Command: Supported 
00:26:47.499 Write Zeroes Command: Supported 00:26:47.499 Set Features Save Field: Not Supported 00:26:47.499 Reservations: Not Supported 00:26:47.499 Timestamp: Not Supported 00:26:47.499 Copy: Not Supported 00:26:47.499 Volatile Write Cache: Present 00:26:47.499 Atomic Write Unit (Normal): 1 00:26:47.499 Atomic Write Unit (PFail): 1 00:26:47.499 Atomic Compare & Write Unit: 1 00:26:47.499 Fused Compare & Write: Not Supported 00:26:47.499 Scatter-Gather List 00:26:47.499 SGL Command Set: Supported 00:26:47.499 SGL Keyed: Not Supported 00:26:47.499 SGL Bit Bucket Descriptor: Not Supported 00:26:47.499 SGL Metadata Pointer: Not Supported 00:26:47.499 Oversized SGL: Not Supported 00:26:47.499 SGL Metadata Address: Not Supported 00:26:47.499 SGL Offset: Supported 00:26:47.499 Transport SGL Data Block: Not Supported 00:26:47.499 Replay Protected Memory Block: Not Supported 00:26:47.499 00:26:47.499 Firmware Slot Information 00:26:47.499 ========================= 00:26:47.499 Active slot: 0 00:26:47.499 00:26:47.499 Asymmetric Namespace Access 00:26:47.499 =========================== 00:26:47.499 Change Count : 0 00:26:47.499 Number of ANA Group Descriptors : 1 00:26:47.499 ANA Group Descriptor : 0 00:26:47.499 ANA Group ID : 1 00:26:47.499 Number of NSID Values : 1 00:26:47.499 Change Count : 0 00:26:47.499 ANA State : 1 00:26:47.499 Namespace Identifier : 1 00:26:47.499 00:26:47.499 Commands Supported and Effects 00:26:47.499 ============================== 00:26:47.499 Admin Commands 00:26:47.499 -------------- 00:26:47.499 Get Log Page (02h): Supported 00:26:47.499 Identify (06h): Supported 00:26:47.499 Abort (08h): Supported 00:26:47.499 Set Features (09h): Supported 00:26:47.499 Get Features (0Ah): Supported 00:26:47.499 Asynchronous Event Request (0Ch): Supported 00:26:47.499 Keep Alive (18h): Supported 00:26:47.499 I/O Commands 00:26:47.499 ------------ 00:26:47.499 Flush (00h): Supported 00:26:47.499 Write (01h): Supported LBA-Change 00:26:47.499 Read (02h): Supported 00:26:47.499 Write Zeroes (08h): Supported LBA-Change 00:26:47.499 Dataset Management (09h): Supported 00:26:47.499 00:26:47.499 Error Log 00:26:47.499 ========= 00:26:47.499 Entry: 0 00:26:47.499 Error Count: 0x3 00:26:47.499 Submission Queue Id: 0x0 00:26:47.499 Command Id: 0x5 00:26:47.499 Phase Bit: 0 00:26:47.499 Status Code: 0x2 00:26:47.499 Status Code Type: 0x0 00:26:47.499 Do Not Retry: 1 00:26:47.758 Error Location: 0x28 00:26:47.758 LBA: 0x0 00:26:47.758 Namespace: 0x0 00:26:47.758 Vendor Log Page: 0x0 00:26:47.758 ----------- 00:26:47.758 Entry: 1 00:26:47.758 Error Count: 0x2 00:26:47.758 Submission Queue Id: 0x0 00:26:47.758 Command Id: 0x5 00:26:47.758 Phase Bit: 0 00:26:47.758 Status Code: 0x2 00:26:47.758 Status Code Type: 0x0 00:26:47.758 Do Not Retry: 1 00:26:47.758 Error Location: 0x28 00:26:47.758 LBA: 0x0 00:26:47.758 Namespace: 0x0 00:26:47.758 Vendor Log Page: 0x0 00:26:47.758 ----------- 00:26:47.758 Entry: 2 00:26:47.758 Error Count: 0x1 00:26:47.758 Submission Queue Id: 0x0 00:26:47.758 Command Id: 0x4 00:26:47.758 Phase Bit: 0 00:26:47.758 Status Code: 0x2 00:26:47.758 Status Code Type: 0x0 00:26:47.758 Do Not Retry: 1 00:26:47.758 Error Location: 0x28 00:26:47.758 LBA: 0x0 00:26:47.758 Namespace: 0x0 00:26:47.758 Vendor Log Page: 0x0 00:26:47.758 00:26:47.758 Number of Queues 00:26:47.758 ================ 00:26:47.758 Number of I/O Submission Queues: 128 00:26:47.758 Number of I/O Completion Queues: 128 00:26:47.758 00:26:47.758 ZNS Specific Controller Data 00:26:47.758 
============================ 00:26:47.758 Zone Append Size Limit: 0 00:26:47.758 00:26:47.758 00:26:47.758 Active Namespaces 00:26:47.758 ================= 00:26:47.758 get_feature(0x05) failed 00:26:47.758 Namespace ID:1 00:26:47.758 Command Set Identifier: NVM (00h) 00:26:47.758 Deallocate: Supported 00:26:47.758 Deallocated/Unwritten Error: Not Supported 00:26:47.758 Deallocated Read Value: Unknown 00:26:47.758 Deallocate in Write Zeroes: Not Supported 00:26:47.758 Deallocated Guard Field: 0xFFFF 00:26:47.758 Flush: Supported 00:26:47.758 Reservation: Not Supported 00:26:47.758 Namespace Sharing Capabilities: Multiple Controllers 00:26:47.758 Size (in LBAs): 1953525168 (931GiB) 00:26:47.758 Capacity (in LBAs): 1953525168 (931GiB) 00:26:47.758 Utilization (in LBAs): 1953525168 (931GiB) 00:26:47.758 UUID: 76f3f28e-058b-4e4f-87ac-b534a7897c49 00:26:47.759 Thin Provisioning: Not Supported 00:26:47.759 Per-NS Atomic Units: Yes 00:26:47.759 Atomic Boundary Size (Normal): 0 00:26:47.759 Atomic Boundary Size (PFail): 0 00:26:47.759 Atomic Boundary Offset: 0 00:26:47.759 NGUID/EUI64 Never Reused: No 00:26:47.759 ANA group ID: 1 00:26:47.759 Namespace Write Protected: No 00:26:47.759 Number of LBA Formats: 1 00:26:47.759 Current LBA Format: LBA Format #00 00:26:47.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:47.759 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:47.759 rmmod nvme_tcp 00:26:47.759 rmmod nvme_fabrics 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.759 11:37:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.662 11:37:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.662 
11:37:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:49.663 11:37:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:49.663 11:37:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:49.663 11:37:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.663 11:37:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:49.663 11:37:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:49.663 11:37:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.663 11:37:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:49.663 11:37:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:49.935 11:37:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:52.470 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:52.470 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:52.470 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:52.470 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:52.470 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:52.470 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:52.470 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:52.470 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:52.729 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:52.729 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:52.729 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:52.729 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:52.729 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:52.729 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:52.729 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:52.729 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:53.666 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:53.666 00:26:53.666 real 0m16.263s 00:26:53.666 user 0m4.142s 00:26:53.666 sys 0m8.419s 00:26:53.666 11:37:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:53.666 11:37:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:53.666 ************************************ 00:26:53.666 END TEST nvmf_identify_kernel_target 00:26:53.666 ************************************ 00:26:53.666 11:37:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:53.666 11:37:37 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:53.666 11:37:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:53.666 11:37:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.666 11:37:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.666 ************************************ 00:26:53.666 START TEST nvmf_auth_host 00:26:53.666 ************************************ 00:26:53.666 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:53.666 * Looking for test storage... 00:26:53.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.926 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.191 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.191 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:59.191 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.192 
11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:59.192 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:59.192 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:59.192 Found net devices under 0000:86:00.0: 
cvl_0_0 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:59.192 Found net devices under 0000:86:00.1: cvl_0_1 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.192 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:59.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:26:59.451 00:26:59.451 --- 10.0.0.2 ping statistics --- 00:26:59.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.451 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:59.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:26:59.451 00:26:59.451 --- 10.0.0.1 ping statistics --- 00:26:59.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.451 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:59.451 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=736202 00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 736202 00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 736202 ']' 00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
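The nvmf_tcp_init trace above rebuilds, for the auth test, the same two-endpoint TCP loopback topology the identify_kernel_target run used: one port of the E810 pair is moved into a private network namespace to act as the target side, the other stays in the root namespace as the initiator, and TCP port 4420 is opened between them before nvmf_tgt is launched inside the namespace. A minimal sketch of that setup, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing seen in the log:

    # target port goes into its own namespace, initiator port stays on the host
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in, then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Running the target in a namespace gives a real TCP path across the two physical ports of a single host, which is why NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk" in the entries that follow.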
00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:59.452 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8a79ceb4da11012fd9b4add56b7af1eb 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Uo3 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8a79ceb4da11012fd9b4add56b7af1eb 0 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8a79ceb4da11012fd9b4add56b7af1eb 0 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8a79ceb4da11012fd9b4add56b7af1eb 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Uo3 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Uo3 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Uo3 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:00.387 
11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c4dfba3e5b4df2c57f1ee6aa40bd2223a5016d0f0f1a1f032596ad2a6d01b051 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6kx 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c4dfba3e5b4df2c57f1ee6aa40bd2223a5016d0f0f1a1f032596ad2a6d01b051 3 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c4dfba3e5b4df2c57f1ee6aa40bd2223a5016d0f0f1a1f032596ad2a6d01b051 3 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c4dfba3e5b4df2c57f1ee6aa40bd2223a5016d0f0f1a1f032596ad2a6d01b051 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6kx 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6kx 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.6kx 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:00.387 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:00.645 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:00.645 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c8e68771e7197647e44b1bd421727eb12db951d41d6c8e8e 00:27:00.645 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:00.645 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Bq0 00:27:00.645 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c8e68771e7197647e44b1bd421727eb12db951d41d6c8e8e 0 00:27:00.645 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c8e68771e7197647e44b1bd421727eb12db951d41d6c8e8e 0 00:27:00.645 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:00.645 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:00.645 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c8e68771e7197647e44b1bd421727eb12db951d41d6c8e8e 00:27:00.645 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:00.645 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Bq0 00:27:00.645 11:37:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Bq0 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Bq0 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=779d36ebbad0fdc905a3d693d14eded5a94ef57ad109465d 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rPC 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 779d36ebbad0fdc905a3d693d14eded5a94ef57ad109465d 2 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 779d36ebbad0fdc905a3d693d14eded5a94ef57ad109465d 2 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=779d36ebbad0fdc905a3d693d14eded5a94ef57ad109465d 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rPC 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rPC 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.rPC 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=581bb06eb485f52a5aeac5848a0ce1fa 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TY8 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 581bb06eb485f52a5aeac5848a0ce1fa 1 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 581bb06eb485f52a5aeac5848a0ce1fa 1 
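Each gen_dhchap_key trace in this stretch reads N random bytes with xxd, wraps them into a DHHC-1:<digest>:... secret via a short inline python step, and stores the result mode 0600 in a mktemp file whose path becomes keys[i] or ckeys[i]. A sketch of one "null, 32" key; the xtrace does not show the python body, so the encoding below (base64 of the key bytes followed by their little-endian CRC-32, digest id 0) is an assumption based on the usual NVMe DH-HMAC-CHAP ASCII secret layout:

    key_hex=$(xxd -p -c0 -l 16 /dev/urandom)      # 16 random bytes -> 32 hex chars, matching len=32 above
    file=$(mktemp -t spdk.key-null.XXX)
    # assumed encoding: base64(key bytes + little-endian CRC-32 of the key), "00" = null digest
    secret=$(python3 - "$key_hex" <<'PY'
    import sys, base64, struct, zlib
    key = bytes.fromhex(sys.argv[1])
    crc = struct.pack('<I', zlib.crc32(key) & 0xffffffff)
    print('DHHC-1:00:' + base64.b64encode(key + crc).decode() + ':')
    PY
    )
    echo "$secret" > "$file"
    chmod 0600 "$file"
    echo "$file"                                  # caller captures this path into keys[]/ckeys[]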
00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=581bb06eb485f52a5aeac5848a0ce1fa 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TY8 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TY8 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.TY8 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6d7a7045057fd67c0a46919ce293468a 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.JRC 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6d7a7045057fd67c0a46919ce293468a 1 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6d7a7045057fd67c0a46919ce293468a 1 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6d7a7045057fd67c0a46919ce293468a 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.JRC 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.JRC 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.JRC 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:00.645 11:37:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=a45109b838581fef1b7ccc7e035ff05b20dbf640fe7c7dd2 00:27:00.646 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:00.646 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ukK 00:27:00.646 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a45109b838581fef1b7ccc7e035ff05b20dbf640fe7c7dd2 2 00:27:00.646 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a45109b838581fef1b7ccc7e035ff05b20dbf640fe7c7dd2 2 00:27:00.646 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:00.646 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:00.646 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a45109b838581fef1b7ccc7e035ff05b20dbf640fe7c7dd2 00:27:00.646 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:00.646 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ukK 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ukK 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ukK 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=143008b2ee9eb68f727033b257ba0365 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.VTk 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 143008b2ee9eb68f727033b257ba0365 0 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 143008b2ee9eb68f727033b257ba0365 0 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=143008b2ee9eb68f727033b257ba0365 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.VTk 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.VTk 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.VTk 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a4b4d5c9fbfe4784f6bbfea98223d82057f53985f571bae7d0d4fd136fc34555 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.BCJ 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a4b4d5c9fbfe4784f6bbfea98223d82057f53985f571bae7d0d4fd136fc34555 3 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a4b4d5c9fbfe4784f6bbfea98223d82057f53985f571bae7d0d4fd136fc34555 3 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a4b4d5c9fbfe4784f6bbfea98223d82057f53985f571bae7d0d4fd136fc34555 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.BCJ 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.BCJ 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.BCJ 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 736202 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 736202 ']' 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:00.903 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Uo3 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.6kx ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6kx 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Bq0 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.rPC ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rPC 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.TY8 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.JRC ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JRC 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
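The host/auth.sh@80 loop traced here, and continued just below for key3, ckey3 and key4, hands every generated key file to the running target under a keyring name (key0..key4, ckey0..ckey3). A compact equivalent, assuming rpc_cmd is a thin wrapper around scripts/rpc.py talking to the same /var/tmp/spdk.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in "${!keys[@]}"; do
        "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"              # host secret for keyid i
        if [[ -n "${ckeys[$i]}" ]]; then
            "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"        # controller secret for bidirectional auth
        fi
    done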
00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ukK 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.VTk ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.VTk 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.BCJ 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
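nvmet_auth_init, whose configure_kernel_target trace starts here and runs through the discovery listing below, builds a kernel NVMe-oF target out of configfs and exposes the local NVMe namespace on 10.0.0.1:4420. A condensed sketch of that layout; the subsystem, port and host paths are copied from the trace, while the exact attributes written by each bare "echo" (xtrace hides the redirections) follow the stock kernel nvmet configfs names and are therefore assumptions:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    modprobe nvmet
    mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"

    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

    # host/auth.sh@36-38 then adds the test host and (presumably) closes allow_any_host:
    mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$subsys/attr_allow_any_host"
    ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"

    # the two Discovery Log entries in the trace come from querying this port:
    nvme discover -t tcp -a 10.0.0.1 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562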
00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:01.161 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:03.694 Waiting for block devices as requested 00:27:03.694 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:27:03.953 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:03.953 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:04.212 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:04.212 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:04.212 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:04.212 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:04.471 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:04.471 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:04.471 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:04.471 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:04.730 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:04.730 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:04.730 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:04.988 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:04.988 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:04.988 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:05.568 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:05.568 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:05.568 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:05.568 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:05.568 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:05.568 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:05.568 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:05.568 11:37:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:05.568 11:37:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:05.827 No valid GPT data, bailing 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:05.827 00:27:05.827 Discovery Log Number of Records 2, Generation counter 2 00:27:05.827 =====Discovery Log Entry 0====== 00:27:05.827 trtype: tcp 00:27:05.827 adrfam: ipv4 00:27:05.827 subtype: current discovery subsystem 00:27:05.827 treq: not specified, sq flow control disable supported 00:27:05.827 portid: 1 00:27:05.827 trsvcid: 4420 00:27:05.827 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:05.827 traddr: 10.0.0.1 00:27:05.827 eflags: none 00:27:05.827 sectype: none 00:27:05.827 =====Discovery Log Entry 1====== 00:27:05.827 trtype: tcp 00:27:05.827 adrfam: ipv4 00:27:05.827 subtype: nvme subsystem 00:27:05.827 treq: not specified, sq flow control disable supported 00:27:05.827 portid: 1 00:27:05.827 trsvcid: 4420 00:27:05.827 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:05.827 traddr: 10.0.0.1 00:27:05.827 eflags: none 00:27:05.827 sectype: none 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 
]] 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.827 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.086 nvme0n1 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.086 11:37:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.086 
11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.086 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.087 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.087 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.087 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.087 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.087 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.087 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.087 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.087 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:06.087 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.087 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.349 nvme0n1 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.349 11:37:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.349 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.350 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.350 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.350 nvme0n1 00:27:06.350 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.350 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.350 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.350 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.350 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
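Every connect_authenticate pass in this section follows the same host-side recipe: restrict the bdev_nvme DH-HMAC-CHAP options to one digest/dhgroup pair, attach to the kernel target using the keyring names registered earlier, check that a controller named nvme0 appeared, and detach it again. A sketch of a single pass (sha256/ffdhe2048 with bidirectional key1/ckey1, the same RPC names and flags that appear in the trace), assuming scripts/rpc.py against the default /var/tmp/spdk.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]          # the test only verifies that authentication produced a controller

    "$rpc" bdev_nvme_detach_controller nvme0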
00:27:06.350 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.643 11:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.643 nvme0n1 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.643 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:06.644 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:06.644 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:06.644 11:37:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.903 nvme0n1 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.903 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.904 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.162 nvme0n1 00:27:07.162 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.162 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.162 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.162 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.162 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.162 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.162 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.163 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.422 nvme0n1 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.422 11:37:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.682 nvme0n1 00:27:07.682 
11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.682 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.941 nvme0n1 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.941 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.942 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.201 nvme0n1 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.201 
11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.201 11:37:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.201 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.202 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.202 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:08.202 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.202 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.460 nvme0n1 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:08.460 11:37:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.460 11:37:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.461 11:37:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.461 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.461 11:37:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.719 nvme0n1 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.719 11:37:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.719 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.978 nvme0n1 00:27:08.978 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.978 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.978 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.978 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.978 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.978 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.978 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.978 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.978 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.978 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.237 11:37:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.237 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.496 nvme0n1 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.496 11:37:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.761 nvme0n1 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.761 11:37:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.761 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.762 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.021 nvme0n1 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:10.021 11:37:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.021 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.022 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.588 nvme0n1 00:27:10.588 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.588 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.588 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.588 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.588 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.588 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.588 11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.588 
11:37:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.588 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.588 11:37:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.588 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.589 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.589 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.589 11:37:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.589 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.589 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.589 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.589 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.589 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.589 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.589 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.848 nvme0n1 00:27:10.848 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.848 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.848 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.848 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.848 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.848 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.107 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.108 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.108 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.108 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.108 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.108 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.108 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.367 nvme0n1 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.367 
11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.367 11:37:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.935 nvme0n1 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.935 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.194 nvme0n1 00:27:12.194 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.194 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.194 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.194 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.194 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.194 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:12.453 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.454 11:37:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.024 nvme0n1 00:27:13.024 11:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.024 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.024 11:37:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.024 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.024 11:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.024 11:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.025 11:37:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.593 nvme0n1 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.593 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.529 nvme0n1 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.529 
11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:14.529 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.530 11:37:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.098 nvme0n1 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.098 
11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.098 11:37:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.667 nvme0n1 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.667 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.926 nvme0n1 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
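At this point the loop has moved on to the sha384/ffdhe2048 combination; after every attach the test also verifies that authentication actually produced a live controller and detaches it before trying the next key. A minimal sketch of that check, built from the same rpc calls visible in the log (bdev_nvme_get_controllers, jq, bdev_nvme_detach_controller) and again assuming scripts/rpc.py talks to the running SPDK application:

  # confirm the expected controller attached, then tear it down for the next iteration
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]] || exit 1
  scripts/rpc.py bdev_nvme_detach_controller nvme0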
00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.926 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.187 nvme0n1 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.187 nvme0n1 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.187 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.446 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.447 nvme0n1 00:27:16.447 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.447 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.447 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.447 11:37:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.447 11:37:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.447 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.447 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.447 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.447 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.447 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.706 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.707 nvme0n1 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
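The get_main_ns_ip evaluations traced above (and repeated before every attach in this section) pick the address handed to bdev_nvme_attach_controller: for the tcp transport used by this job the helper resolves NVMF_INITIATOR_IP, which is 10.0.0.1 in this run. A minimal bash sketch of that selection logic, reconstructed from the xtrace lines (the real helper lives in nvmf/common.sh around the @741-@755 markers shown here and may differ in detail; the TEST_TRANSPORT variable name is an assumption):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA jobs target the first target IP
          ["tcp"]=NVMF_INITIATOR_IP       # TCP jobs (this one) use the initiator IP
      )

      # Bail out if the transport or its candidate variable name is empty,
      # mirroring the [[ -z ... ]] checks visible in the trace.
      [[ -z "$TEST_TRANSPORT" ]] && return 1
      [[ -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}

      # ${!ip} dereferences the variable whose *name* is stored in ip,
      # e.g. NVMF_INITIATOR_IP -> 10.0.0.1 in this run.
      [[ -z "${!ip}" ]] && return 1
      echo "${!ip}"
  }

With TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 this echoes 10.0.0.1, which is exactly the -a 10.0.0.1 argument on every attach call that follows.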
00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.707 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.968 nvme0n1 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.968 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.228 nvme0n1 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.228 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.487 nvme0n1 00:27:17.487 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.487 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.487 11:38:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.487 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.487 11:38:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.487 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.488 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.488 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.488 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.488 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.488 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.488 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.488 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.488 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.488 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.488 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.745 nvme0n1 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.745 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.004 nvme0n1 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.004 11:38:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.004 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.262 nvme0n1 00:27:18.262 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.262 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.262 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.262 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.262 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.262 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.520 11:38:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.778 nvme0n1 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.778 11:38:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.778 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.037 nvme0n1 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:19.037 11:38:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.037 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.296 nvme0n1 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:19.296 11:38:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.554 nvme0n1 00:27:19.554 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.554 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.554 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.554 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.554 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.554 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.554 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.554 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.554 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.554 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.812 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.813 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.813 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.813 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.813 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.813 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.813 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.813 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.813 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.813 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.073 nvme0n1 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.073 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.074 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.710 nvme0n1 00:27:20.710 11:38:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.710 11:38:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.710 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.710 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.710 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.710 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.710 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.710 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.710 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.711 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.969 nvme0n1 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.969 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.536 nvme0n1 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
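The get_main_ns_ip calls traced above and below resolve which address the host should dial for the active transport. A minimal bash sketch of that lookup, reconstructed only from the commands visible in this trace (the TEST_TRANSPORT name and the surrounding function body are assumptions; only the candidate map, the indirect expansion, and the 10.0.0.1 result come from the log):

# Sketch, not the test's literal source: pick the initiator-side address for
# the transport under test, mirroring the ip_candidates lookup in the trace.
get_main_ns_ip() {
    local ip
    # Map each transport to the NAME of the variable holding its address,
    # exactly as the trace shows (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP).
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                       # trace: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1     # trace: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}                       # trace: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                                # indirect expansion; trace: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                              # trace: echo 10.0.0.1
}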
00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.536 11:38:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.795 nvme0n1 00:27:21.795 11:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.795 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.795 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.795 11:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.795 11:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.795 11:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
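Each pass of the loop in this trace sets one digest/dhgroup/keyid combination on the kernel nvmet target (nvmet_auth_set_key) and then drives the same combination from the SPDK host side (connect_authenticate). Condensed into bare RPC calls, one sha384/ffdhe8192 pass looks roughly like the sketch below; rpc_cmd in the log is the harness wrapper, and the rpc.py invocation plus the assumption that key0/ckey0 were registered earlier in the script are illustrative rather than the test's literal source:

# One connect_authenticate pass, condensed from the rpc_cmd calls in the trace.
# Restrict the host to the digest/dhgroup under test.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
# Attach with DH-HMAC-CHAP, using the host key and (when present) the controller key
# set up earlier in the script; flags and NQNs are exactly those in the trace.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Confirm the controller authenticated and came up under the expected name,
# then detach so the next dhgroup/keyid combination starts from a clean state.
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]
./scripts/rpc.py bdev_nvme_detach_controller nvme0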
00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.054 11:38:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.622 nvme0n1 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.622 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.190 nvme0n1 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.190 11:38:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.758 nvme0n1 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.758 11:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.016 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.016 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.016 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.584 nvme0n1 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.584 11:38:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.584 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.584 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.584 11:38:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.584 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.584 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.584 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.584 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.584 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.584 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.584 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.584 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.584 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.152 nvme0n1 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.152 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.411 nvme0n1 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.411 11:38:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.411 11:38:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.670 nvme0n1 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:25.670 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.671 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.930 nvme0n1 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.930 11:38:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.930 11:38:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.930 nvme0n1 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.930 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.189 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.189 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.189 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.189 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.189 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.189 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.189 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:26.189 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.189 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.190 nvme0n1 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.190 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.449 nvme0n1 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.449 
11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.449 11:38:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.449 11:38:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.449 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.708 nvme0n1 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
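The nvmet_auth_set_key steps traced at host/auth.sh@42-51 set up the target side of each pass: the echo'd 'hmac(sha512)', ffdhe* group and DHHC-1 strings configure the kernel nvmet host entry the test authenticates against. A minimal sketch of the equivalent configfs writes follows; the configfs path, the attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the host directory name are assumptions, not taken from this log, and the key values are shortened placeholders:

  # sketch only -- configfs path and attribute names are assumed, not shown in this trace
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)'  > "$host/dhchap_hash"      # digest for DH-HMAC-CHAP
  echo 'ffdhe3072'     > "$host/dhchap_dhgroup"   # FFDHE group for this pass
  echo 'DHHC-1:01:...' > "$host/dhchap_key"       # host key for this keyid (placeholder value)
  echo 'DHHC-1:01:...' > "$host/dhchap_ctrl_key"  # controller key, only when the keyid has a ckey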
00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.708 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.967 nvme0n1 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.967 11:38:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.967 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
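The get_main_ns_ip helper traced at nvmf/common.sh@741-755 only resolves which address the initiator should dial: it maps the transport to a variable name and prints that variable's value. Distilled from the trace above into a sketch (the name of the transport variable is assumed; the array names are as they appear in the trace):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      # map the transport to the variable that holds the address, then dereference it
      [[ -n ${ip_candidates[$TEST_TRANSPORT]:-} ]] && ip=${ip_candidates[$TEST_TRANSPORT]}
      if [[ -n $ip ]]; then
          [[ -n ${!ip} ]] && echo "${!ip}"   # prints 10.0.0.1 for tcp in this run
      fi
  }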
00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.968 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.227 nvme0n1 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.227 
11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.227 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.228 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.489 nvme0n1 00:27:27.489 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.489 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.489 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.489 11:38:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.489 11:38:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:27.489 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.490 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.752 nvme0n1 00:27:27.752 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.752 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.752 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.752 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.752 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.752 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.011 11:38:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.011 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.270 nvme0n1 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
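Every iteration in this part of the log repeats the same host-side sequence, driven by the loops at host/auth.sh@101-103. Condensed into one pass, this is a sketch that relies on state set up earlier in the run (the keys/ckeys arrays, the key names passed to --dhchap-key, the rpc_cmd wrapper around scripts/rpc.py, and nvmet_auth_set_key); only the sha512 digest shown in this section is covered:

  for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, ffdhe3072, ffdhe4096, ... (host/auth.sh@101)
    for keyid in "${!keys[@]}"; do         # host/auth.sh@102
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"   # target-side key/dhgroup setup
      # tell the SPDK initiator which digests/dhgroups it may negotiate
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      # connect with in-band DH-HMAC-CHAP; --dhchap-ctrlr-key is passed only when ckey$keyid exists
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
      # authentication succeeded if the controller shows up, then tear it down for the next pass
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done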
00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.270 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.529 nvme0n1 00:27:28.529 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.529 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:28.529 11:38:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.529 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.529 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.529 11:38:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:28.529 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.530 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.787 nvme0n1 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.787 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.046 nvme0n1 00:27:29.046 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.046 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.046 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.046 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.046 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.046 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.046 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.046 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.046 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:29.046 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:29.304 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.305 11:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.564 nvme0n1 00:27:29.564 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.564 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.564 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.564 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.564 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.564 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.564 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.564 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.564 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.564 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.564 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.564 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.565 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.139 nvme0n1 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.139 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.398 nvme0n1 00:27:30.398 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:30.399 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.657 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.657 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.658 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.658 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.658 11:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:30.658 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.658 11:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.658 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.917 nvme0n1 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.917 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.918 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.918 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.918 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.918 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.918 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.918 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.918 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.918 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.918 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.918 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.486 nvme0n1 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.486 11:38:14 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGE3OWNlYjRkYTExMDEyZmQ5YjRhZGQ1NmI3YWYxZWK2EtH1: 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: ]] 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRkZmJhM2U1YjRkZjJjNTdmMWVlNmFhNDBiZDIyMjNhNTAxNmQwZjBmMWExZjAzMjU5NmFkMmE2ZDAxYjA1MUYr9Uk=: 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.486 11:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.053 nvme0n1 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.053 11:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.619 nvme0n1 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.619 11:38:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmIwNmViNDg1ZjUyYTVhZWFjNTg0OGEwY2UxZmEjLj7T: 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: ]] 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ3YTcwNDUwNTdmZDY3YzBhNDY5MTljZTI5MzQ2OGErnbv8: 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.619 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.878 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.445 nvme0n1 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MTA5YjgzODU4MWZlZjFiN2NjYzdlMDM1ZmYwNWIyMGRiZjY0MGZlN2M3ZGQyHSQgdA==: 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: ]] 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTQzMDA4YjJlZTllYjY4ZjcyNzAzM2IyNTdiYTAzNjXMjv1t: 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:33.445 11:38:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.445 11:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.012 nvme0n1 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTRiNGQ1YzlmYmZlNDc4NGY2YmJmZWE5ODIyM2Q4MjA1N2Y1Mzk4NWY1NzFiYWU3ZDBkNGZkMTM2ZmMzNDU1NcS8qgI=: 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:34.012 11:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.596 nvme0n1 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzhlNjg3NzFlNzE5NzY0N2U0NGIxYmQ0MjE3MjdlYjEyZGI5NTFkNDFkNmM4ZThl1QYRfQ==: 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: ]] 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzc5ZDM2ZWJiYWQwZmRjOTA1YTNkNjkzZDE0ZWRlZDVhOTRlZjU3YWQxMDk0NjVkhOyQ/Q==: 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.596 
11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.596 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.862 request: 00:27:34.862 { 00:27:34.862 "name": "nvme0", 00:27:34.862 "trtype": "tcp", 00:27:34.862 "traddr": "10.0.0.1", 00:27:34.862 "adrfam": "ipv4", 00:27:34.862 "trsvcid": "4420", 00:27:34.862 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:34.862 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:34.862 "prchk_reftag": false, 00:27:34.862 "prchk_guard": false, 00:27:34.862 "hdgst": false, 00:27:34.862 "ddgst": false, 00:27:34.862 "method": "bdev_nvme_attach_controller", 00:27:34.862 "req_id": 1 00:27:34.862 } 00:27:34.862 Got JSON-RPC error response 00:27:34.862 response: 00:27:34.862 { 00:27:34.862 "code": -5, 00:27:34.862 "message": "Input/output error" 00:27:34.862 } 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.862 request: 00:27:34.862 { 00:27:34.862 "name": "nvme0", 00:27:34.862 "trtype": "tcp", 00:27:34.862 "traddr": "10.0.0.1", 00:27:34.862 "adrfam": "ipv4", 00:27:34.862 "trsvcid": "4420", 00:27:34.862 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:34.862 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:34.862 "prchk_reftag": false, 00:27:34.862 "prchk_guard": false, 00:27:34.862 "hdgst": false, 00:27:34.862 "ddgst": false, 00:27:34.862 "dhchap_key": "key2", 00:27:34.862 "method": "bdev_nvme_attach_controller", 00:27:34.862 "req_id": 1 00:27:34.862 } 00:27:34.862 Got JSON-RPC error response 00:27:34.862 response: 00:27:34.862 { 00:27:34.862 "code": -5, 00:27:34.862 "message": "Input/output error" 00:27:34.862 } 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:34.862 11:38:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.862 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.121 request: 00:27:35.121 { 00:27:35.121 "name": "nvme0", 00:27:35.121 "trtype": "tcp", 00:27:35.121 "traddr": "10.0.0.1", 00:27:35.121 "adrfam": "ipv4", 
00:27:35.121 "trsvcid": "4420", 00:27:35.121 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:35.121 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:35.121 "prchk_reftag": false, 00:27:35.121 "prchk_guard": false, 00:27:35.121 "hdgst": false, 00:27:35.121 "ddgst": false, 00:27:35.121 "dhchap_key": "key1", 00:27:35.121 "dhchap_ctrlr_key": "ckey2", 00:27:35.121 "method": "bdev_nvme_attach_controller", 00:27:35.121 "req_id": 1 00:27:35.121 } 00:27:35.121 Got JSON-RPC error response 00:27:35.121 response: 00:27:35.121 { 00:27:35.121 "code": -5, 00:27:35.121 "message": "Input/output error" 00:27:35.121 } 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:35.121 rmmod nvme_tcp 00:27:35.121 rmmod nvme_fabrics 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 736202 ']' 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 736202 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 736202 ']' 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 736202 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 736202 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 736202' 00:27:35.121 killing process with pid 736202 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 736202 00:27:35.121 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 736202 00:27:35.380 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:27:35.380 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:35.380 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:35.380 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:35.380 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:35.380 11:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.380 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:35.380 11:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:37.285 11:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:40.572 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:40.572 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:41.139 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:41.139 11:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Uo3 /tmp/spdk.key-null.Bq0 /tmp/spdk.key-sha256.TY8 /tmp/spdk.key-sha384.ukK /tmp/spdk.key-sha512.BCJ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:41.139 11:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:44.427 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:44.428 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:44.428 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:44.428 00:27:44.428 real 0m50.329s 00:27:44.428 user 0m45.052s 00:27:44.428 sys 0m12.188s 00:27:44.428 11:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:44.428 11:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.428 ************************************ 00:27:44.428 END TEST nvmf_auth_host 00:27:44.428 ************************************ 00:27:44.428 11:38:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:44.428 11:38:27 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:27:44.428 11:38:27 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:44.428 11:38:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:44.428 11:38:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.428 11:38:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.428 ************************************ 00:27:44.428 START TEST nvmf_digest 00:27:44.428 ************************************ 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:44.428 * Looking for test storage... 
00:27:44.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.428 11:38:27 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.428 11:38:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:49.742 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:49.743 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:49.743 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:49.743 Found net devices under 0000:86:00.0: cvl_0_0 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:49.743 Found net devices under 0000:86:00.1: cvl_0_1 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:49.743 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:50.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:27:50.001 00:27:50.001 --- 10.0.0.2 ping statistics --- 00:27:50.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.001 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:50.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:27:50.001 00:27:50.001 --- 10.0.0.1 ping statistics --- 00:27:50.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.001 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:50.001 ************************************ 00:27:50.001 START TEST nvmf_digest_clean 00:27:50.001 ************************************ 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:50.001 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:50.002 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:50.002 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=749397 00:27:50.002 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 749397 00:27:50.002 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:50.002 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 749397 ']' 00:27:50.002 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.002 
11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:50.002 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.002 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:50.002 11:38:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:50.002 [2024-07-15 11:38:33.531853] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:27:50.002 [2024-07-15 11:38:33.531893] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.002 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.260 [2024-07-15 11:38:33.603037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.260 [2024-07-15 11:38:33.685407] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.260 [2024-07-15 11:38:33.685445] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.260 [2024-07-15 11:38:33.685452] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.260 [2024-07-15 11:38:33.685458] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.260 [2024-07-15 11:38:33.685462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:50.260 [2024-07-15 11:38:33.685486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.828 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:50.828 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:50.828 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:50.828 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:50.828 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:50.829 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.829 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:50.829 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:50.829 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:50.829 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.829 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:51.088 null0 00:27:51.088 [2024-07-15 11:38:34.457854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.088 [2024-07-15 11:38:34.482046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=749634 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 749634 /var/tmp/bperf.sock 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 749634 ']' 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:27:51.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:51.088 11:38:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:51.088 [2024-07-15 11:38:34.534345] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:27:51.088 [2024-07-15 11:38:34.534388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749634 ] 00:27:51.088 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.088 [2024-07-15 11:38:34.602883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.088 [2024-07-15 11:38:34.676330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.022 11:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:52.022 11:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:52.022 11:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:52.022 11:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:52.023 11:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:52.023 11:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:52.023 11:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:52.591 nvme0n1 00:27:52.591 11:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:52.591 11:38:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:52.591 Running I/O for 2 seconds... 
00:27:54.496 00:27:54.496 Latency(us) 00:27:54.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.496 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:54.496 nvme0n1 : 2.00 25178.75 98.35 0.00 0.00 5079.07 2678.43 18805.98 00:27:54.496 =================================================================================================================== 00:27:54.496 Total : 25178.75 98.35 0.00 0.00 5079.07 2678.43 18805.98 00:27:54.496 0 00:27:54.496 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:54.496 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:54.496 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:54.496 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:54.496 | select(.opcode=="crc32c") 00:27:54.496 | "\(.module_name) \(.executed)"' 00:27:54.496 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:54.755 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:54.755 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:54.755 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:54.755 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:54.755 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 749634 00:27:54.755 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 749634 ']' 00:27:54.755 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 749634 00:27:54.755 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:54.755 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:54.755 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 749634 00:27:54.755 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:54.755 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:54.756 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 749634' 00:27:54.756 killing process with pid 749634 00:27:54.756 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 749634 00:27:54.756 Received shutdown signal, test time was about 2.000000 seconds 00:27:54.756 00:27:54.756 Latency(us) 00:27:54.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.756 =================================================================================================================== 00:27:54.756 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:54.756 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 749634 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:55.015 11:38:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=750331 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 750331 /var/tmp/bperf.sock 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 750331 ']' 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:55.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:55.015 11:38:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:55.015 [2024-07-15 11:38:38.488326] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:27:55.015 [2024-07-15 11:38:38.488374] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750331 ] 00:27:55.015 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:55.015 Zero copy mechanism will not be used. 
00:27:55.015 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.015 [2024-07-15 11:38:38.555731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.288 [2024-07-15 11:38:38.628471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.910 11:38:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:55.910 11:38:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:55.910 11:38:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:55.910 11:38:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:55.910 11:38:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:56.169 11:38:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.169 11:38:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.428 nvme0n1 00:27:56.428 11:38:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:56.428 11:38:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:56.428 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:56.428 Zero copy mechanism will not be used. 00:27:56.428 Running I/O for 2 seconds... 
00:27:58.333 00:27:58.333 Latency(us) 00:27:58.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.333 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:58.333 nvme0n1 : 2.00 4870.23 608.78 0.00 0.00 3282.37 794.27 6895.53 00:27:58.333 =================================================================================================================== 00:27:58.333 Total : 4870.23 608.78 0.00 0.00 3282.37 794.27 6895.53 00:27:58.333 0 00:27:58.333 11:38:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:58.333 11:38:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:58.333 11:38:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:58.333 11:38:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:58.333 | select(.opcode=="crc32c") 00:27:58.333 | "\(.module_name) \(.executed)"' 00:27:58.333 11:38:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 750331 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 750331 ']' 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 750331 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 750331 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 750331' 00:27:58.592 killing process with pid 750331 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 750331 00:27:58.592 Received shutdown signal, test time was about 2.000000 seconds 00:27:58.592 00:27:58.592 Latency(us) 00:27:58.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.592 =================================================================================================================== 00:27:58.592 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.592 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 750331 00:27:58.850 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:58.850 11:38:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:58.850 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:58.850 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:58.850 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:58.850 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:58.850 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:58.851 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=750915 00:27:58.851 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 750915 /var/tmp/bperf.sock 00:27:58.851 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:58.851 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 750915 ']' 00:27:58.851 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:58.851 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:58.851 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:58.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:58.851 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:58.851 11:38:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.851 [2024-07-15 11:38:42.333217] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
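The launch traced above is the bperf pattern this suite repeats for every workload: bdevperf is started on a private RPC socket with -z (hold I/O until perform_tests is requested) and --wait-for-rpc (hold framework init so digest options can be set first), and the harness then waits for the socket to answer. A minimal sketch of that launch and wait, assuming the workspace path from this job and a simple polling loop in place of the harness's waitforlisten helper:

  # Start bdevperf bound to a private RPC socket; -z defers the workload
  # until perform_tests is issued, --wait-for-rpc defers framework init.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!

  # Poll until the RPC server is listening (roughly what waitforlisten does).
  until $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done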
00:27:58.851 [2024-07-15 11:38:42.333272] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750915 ] 00:27:58.851 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.851 [2024-07-15 11:38:42.402216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.110 [2024-07-15 11:38:42.482931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.679 11:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:59.679 11:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:59.679 11:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:59.679 11:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:59.679 11:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:59.938 11:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.938 11:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:00.197 nvme0n1 00:28:00.197 11:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:00.197 11:38:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:00.197 Running I/O for 2 seconds... 
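Once the bperf instance is up, each run issues the same three RPCs shown in the trace above: finish framework init, attach an NVMe-oF controller over TCP with data digest (CRC32C) enabled via --ddgst, then kick off the timed workload through bdevperf.py. A condensed sketch, reusing the socket and target address from this run:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

  $RPC framework_start_init
  # --ddgst turns on the NVMe/TCP data digest for this controller.
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Drive the workload against the freshly created nvme0n1 bdev.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests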
00:28:02.731 00:28:02.731 Latency(us) 00:28:02.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.731 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:02.731 nvme0n1 : 2.00 27223.44 106.34 0.00 0.00 4693.90 4502.04 14019.01 00:28:02.731 =================================================================================================================== 00:28:02.731 Total : 27223.44 106.34 0.00 0.00 4693.90 4502.04 14019.01 00:28:02.731 0 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:02.731 | select(.opcode=="crc32c") 00:28:02.731 | "\(.module_name) \(.executed)"' 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 750915 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 750915 ']' 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 750915 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:02.731 11:38:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 750915 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 750915' 00:28:02.731 killing process with pid 750915 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 750915 00:28:02.731 Received shutdown signal, test time was about 2.000000 seconds 00:28:02.731 00:28:02.731 Latency(us) 00:28:02.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.731 =================================================================================================================== 00:28:02.731 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 750915 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:02.731 11:38:46 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=751514 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 751514 /var/tmp/bperf.sock 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 751514 ']' 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:02.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:02.731 11:38:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:02.731 [2024-07-15 11:38:46.243727] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:28:02.731 [2024-07-15 11:38:46.243773] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid751514 ] 00:28:02.731 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:02.731 Zero copy mechanism will not be used. 
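After each run the test checks which accel module actually computed the CRC32C digests. The verification traced above reduces to one accel_get_stats call filtered with jq; a sketch under the same assumptions (no DSA in this configuration, so the expected module is "software"):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

  # Pull module name and execution count for the crc32c opcode.
  read -r acc_module acc_executed < <(
      $RPC accel_get_stats |
      jq -r '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

  # The digest path must have run at least once, on the expected module.
  (( acc_executed > 0 )) && [[ $acc_module == software ]] || exit 1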
00:28:02.731 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.731 [2024-07-15 11:38:46.312312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.990 [2024-07-15 11:38:46.392383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.558 11:38:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.558 11:38:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:03.558 11:38:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:03.558 11:38:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:03.558 11:38:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:03.816 11:38:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.816 11:38:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:04.383 nvme0n1 00:28:04.383 11:38:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:04.383 11:38:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:04.383 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:04.383 Zero copy mechanism will not be used. 00:28:04.383 Running I/O for 2 seconds... 
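When a run completes, the bperf instance is torn down through the harness's killprocess helper, whose individual checks show up in the traces before and after this point: confirm the pid is still alive, confirm it is an SPDK reactor rather than a sudo wrapper, then signal it and reap it. Roughly, with the pid of this run as an illustrative value:

  pid=751514
  kill -0 "$pid"                            # still running?
  name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1
  [[ $name != sudo ]] && kill "$pid"        # only signal the reactor itself
  wait "$pid"                               # collect the exit status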
00:28:06.287 00:28:06.287 Latency(us) 00:28:06.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.287 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:06.287 nvme0n1 : 2.00 6723.00 840.37 0.00 0.00 2376.23 1624.15 10029.86 00:28:06.287 =================================================================================================================== 00:28:06.288 Total : 6723.00 840.37 0.00 0.00 2376.23 1624.15 10029.86 00:28:06.288 0 00:28:06.288 11:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:06.288 11:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:06.288 11:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:06.288 11:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:06.288 | select(.opcode=="crc32c") 00:28:06.288 | "\(.module_name) \(.executed)"' 00:28:06.288 11:38:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 751514 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 751514 ']' 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 751514 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 751514 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 751514' 00:28:06.547 killing process with pid 751514 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 751514 00:28:06.547 Received shutdown signal, test time was about 2.000000 seconds 00:28:06.547 00:28:06.547 Latency(us) 00:28:06.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.547 =================================================================================================================== 00:28:06.547 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:06.547 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 751514 00:28:06.806 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 749397 00:28:06.806 11:38:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 749397 ']' 00:28:06.806 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 749397 00:28:06.806 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:06.806 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:06.806 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 749397 00:28:06.806 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:06.806 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:06.806 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 749397' 00:28:06.806 killing process with pid 749397 00:28:06.806 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 749397 00:28:06.806 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 749397 00:28:07.065 00:28:07.065 real 0m17.009s 00:28:07.065 user 0m32.511s 00:28:07.065 sys 0m4.587s 00:28:07.065 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:07.065 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:07.065 ************************************ 00:28:07.065 END TEST nvmf_digest_clean 00:28:07.065 ************************************ 00:28:07.065 11:38:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:07.065 11:38:50 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:07.065 11:38:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:07.065 11:38:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:07.066 ************************************ 00:28:07.066 START TEST nvmf_digest_error 00:28:07.066 ************************************ 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=752236 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 752236 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 752236 ']' 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
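The nvmf_digest_error test starting here brings up its own nvmf target inside the cvl_0_0_ns_spdk network namespace, with every tracepoint group enabled (-e 0xFFFF) and init deferred so the crc32c opcode can be re-routed to the error-injection module before the framework starts. A sketch of that bring-up, matching the command line above (the RPC goes to the default /var/tmp/spdk.sock socket; the harness completes init through its own target-config helper afterwards):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
      $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # Route crc32c to the error module before framework init, so digest
  # corruption can be injected later in the test.
  $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error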
00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:07.066 11:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:07.066 [2024-07-15 11:38:50.609710] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:28:07.066 [2024-07-15 11:38:50.609746] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.066 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.325 [2024-07-15 11:38:50.677470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.325 [2024-07-15 11:38:50.755733] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.325 [2024-07-15 11:38:50.755767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.325 [2024-07-15 11:38:50.755774] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.325 [2024-07-15 11:38:50.755781] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.325 [2024-07-15 11:38:50.755785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
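Because the target runs with -e 0xFFFF, all tracepoint groups are armed, and the notices above describe how to inspect them: take a live snapshot with spdk_trace, or copy the shared-memory buffer for offline analysis. A sketch, assuming the tool lives in this job's build tree:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Live snapshot of the nvmf target's trace buffer (instance id 0).
  $SPDK/build/bin/spdk_trace -s nvmf -i 0

  # Or keep the raw buffer for later decoding.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0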
00:28:07.325 [2024-07-15 11:38:50.755803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:07.894 [2024-07-15 11:38:51.453851] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.894 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.153 null0 00:28:08.153 [2024-07-15 11:38:51.546751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.153 [2024-07-15 11:38:51.570927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=752481 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 752481 /var/tmp/bperf.sock 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 752481 ']' 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:08.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:08.153 11:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.153 [2024-07-15 11:38:51.618520] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:28:08.153 [2024-07-15 11:38:51.618564] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid752481 ] 00:28:08.153 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.153 [2024-07-15 11:38:51.686770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.413 [2024-07-15 11:38:51.767031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.981 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:08.981 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:08.981 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:08.981 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:09.239 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:09.239 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.239 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.239 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.239 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.239 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.498 nvme0n1 00:28:09.498 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:09.498 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.498 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.498 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.498 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:09.498 11:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:09.498 Running I/O for 2 seconds... 00:28:09.498 [2024-07-15 11:38:52.964747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.498 [2024-07-15 11:38:52.964783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.498 [2024-07-15 11:38:52.964794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.498 [2024-07-15 11:38:52.976585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.498 [2024-07-15 11:38:52.976615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.498 [2024-07-15 11:38:52.976624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.498 [2024-07-15 11:38:52.984642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.498 [2024-07-15 11:38:52.984667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.498 [2024-07-15 11:38:52.984676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.498 [2024-07-15 11:38:52.995380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.498 [2024-07-15 11:38:52.995404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.498 [2024-07-15 11:38:52.995412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.498 [2024-07-15 11:38:53.004304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.498 [2024-07-15 11:38:53.004326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.498 [2024-07-15 11:38:53.004335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.498 [2024-07-15 11:38:53.013930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.498 [2024-07-15 11:38:53.013951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.498 [2024-07-15 11:38:53.013960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.498 [2024-07-15 11:38:53.023550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.498 [2024-07-15 11:38:53.023572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16841 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.499 [2024-07-15 11:38:53.023580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.499 [2024-07-15 11:38:53.033100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.499 [2024-07-15 11:38:53.033121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.499 [2024-07-15 11:38:53.033130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.499 [2024-07-15 11:38:53.042409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.499 [2024-07-15 11:38:53.042429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.499 [2024-07-15 11:38:53.042437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.499 [2024-07-15 11:38:53.051704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.499 [2024-07-15 11:38:53.051726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.499 [2024-07-15 11:38:53.051734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.499 [2024-07-15 11:38:53.060653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.499 [2024-07-15 11:38:53.060676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.499 [2024-07-15 11:38:53.060684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.499 [2024-07-15 11:38:53.069825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.499 [2024-07-15 11:38:53.069846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.499 [2024-07-15 11:38:53.069854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.499 [2024-07-15 11:38:53.078356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.499 [2024-07-15 11:38:53.078376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.499 [2024-07-15 11:38:53.078385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.089434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.089456] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.089465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.099382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.099402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.099415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.108220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.108245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.108253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.116921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.116942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.116950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.127053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.127074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.127082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.136795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.136816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.136824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.146030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.146050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.146058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.155005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.155024] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.155032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.164374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.164394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.164402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.175815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.175835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.175843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.184701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.184726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.184734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.193994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.194014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.194022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.203415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.203435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.203443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.212322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.212343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.212351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.221175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.221196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.221204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.230746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.230767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.230775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.240637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.240657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.240665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.249654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.758 [2024-07-15 11:38:53.249675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.758 [2024-07-15 11:38:53.249683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.758 [2024-07-15 11:38:53.258980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.759 [2024-07-15 11:38:53.259000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.759 [2024-07-15 11:38:53.259008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.759 [2024-07-15 11:38:53.269899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.759 [2024-07-15 11:38:53.269919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.759 [2024-07-15 11:38:53.269927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.759 [2024-07-15 11:38:53.278898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.759 [2024-07-15 11:38:53.278919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.759 [2024-07-15 11:38:53.278926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.759 [2024-07-15 11:38:53.291881] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.759 [2024-07-15 11:38:53.291901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.759 [2024-07-15 11:38:53.291909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.759 [2024-07-15 11:38:53.299952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.759 [2024-07-15 11:38:53.299972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.759 [2024-07-15 11:38:53.299980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.759 [2024-07-15 11:38:53.311851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.759 [2024-07-15 11:38:53.311872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.759 [2024-07-15 11:38:53.311881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.759 [2024-07-15 11:38:53.320221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.759 [2024-07-15 11:38:53.320246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.759 [2024-07-15 11:38:53.320254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.759 [2024-07-15 11:38:53.330917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.759 [2024-07-15 11:38:53.330938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.759 [2024-07-15 11:38:53.330946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.759 [2024-07-15 11:38:53.342841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:09.759 [2024-07-15 11:38:53.342862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.759 [2024-07-15 11:38:53.342870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.352970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.352990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.353002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.361858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.361879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.361887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.372168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.372189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.372196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.380754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.380774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.380782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.390432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.390452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.390460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.399198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.399218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.399231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.408476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.408504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.408511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.419215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.419242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.419250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.427585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.427606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.427614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.437741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.437762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.437770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.446844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.446863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.446871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.457153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.457173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.457181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.465677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.465696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.465704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.477435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.477455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.477463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.488476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.488496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.488504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.497198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.497219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.497232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.508485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.508508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.508516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.516915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.516935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.516947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.528122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.528144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.528153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.540460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.540484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.540495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.548214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.548241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.548251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.558723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.558745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:10.018 [2024-07-15 11:38:53.558753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.570439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.570460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.570468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.579352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.579373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.579382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.590544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.590564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.590572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.018 [2024-07-15 11:38:53.601979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.018 [2024-07-15 11:38:53.602000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.018 [2024-07-15 11:38:53.602008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.610776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.610800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.610808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.619478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.619499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.619507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.629034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.629056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 
lba:18202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.629064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.638213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.638239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.638247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.649675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.649696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.649705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.657968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.657989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.657997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.667712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.667733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.667742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.678151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.678172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.678180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.687207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.687234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.687242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.696346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.696367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.696376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.704704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.704724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.704732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.715537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.715558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.715566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.725191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.725213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.725220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.736054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.736075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.736083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.745068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.745088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.745097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.756105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.756128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.756136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.767454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 
00:28:10.278 [2024-07-15 11:38:53.767477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.767485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.777584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.777605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.777615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.785507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.785528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.785536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.796783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.796805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.796813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.806964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.806986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.806994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.815628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.815649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.815657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.278 [2024-07-15 11:38:53.826043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.278 [2024-07-15 11:38:53.826064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.278 [2024-07-15 11:38:53.826072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.279 [2024-07-15 11:38:53.836106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.279 [2024-07-15 11:38:53.836127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.279 [2024-07-15 11:38:53.836135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.279 [2024-07-15 11:38:53.845489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.279 [2024-07-15 11:38:53.845509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.279 [2024-07-15 11:38:53.845517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.279 [2024-07-15 11:38:53.854566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.279 [2024-07-15 11:38:53.854587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.279 [2024-07-15 11:38:53.854595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.279 [2024-07-15 11:38:53.865035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.279 [2024-07-15 11:38:53.865057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.279 [2024-07-15 11:38:53.865065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.873994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.874014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.874023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.883507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.883528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.883536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.893172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.893193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.893201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.902624] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.902645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.902653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.912161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.912181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.912189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.922514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.922535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.922543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.932589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.932610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.932618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.940787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.940807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.940818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.951014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.951035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.951043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.960491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.960513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.960522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.969640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.969660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.969668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.979802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.979823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.979831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:53.988366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:53.988387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:53.988395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.000032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.000052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.000060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.011701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.011722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.011730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.019556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.019576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.019584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.029407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.029432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.029440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.041557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.041579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.041587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.049969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.049989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.049997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.060614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.060634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.060642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.069305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.069327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.069336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.080965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.080988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.080995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.092182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.092202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.092210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.100715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.100734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.100742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.110825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.110846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.110854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.120280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.120300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.120309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.539 [2024-07-15 11:38:54.129138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.539 [2024-07-15 11:38:54.129158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.539 [2024-07-15 11:38:54.129166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.139504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.139527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.139536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.149980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.150001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.150010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.159297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.159318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.159325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.168270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.168291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:10.800 [2024-07-15 11:38:54.168299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.177760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.177780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.177787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.186910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.186930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.186938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.197138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.197158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.197170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.205765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.205786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.205794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.215223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.215248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.215256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.224885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.224906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.224914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.234332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.234352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:5787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.234360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.243777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.243796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.243805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.253042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.253062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.253070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.262494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.262516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.262524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.272778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.272800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.272808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.283051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.283071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.283079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.291823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.291843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.291851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.300766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.300786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.300794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.311008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.311028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.311036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.319611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.319632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.319640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.331334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.331355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.331363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.340757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.340778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.340786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.349307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.349327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.349335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.359048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.800 [2024-07-15 11:38:54.359069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.359081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.800 [2024-07-15 11:38:54.368762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 
00:28:10.800 [2024-07-15 11:38:54.368783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.800 [2024-07-15 11:38:54.368791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.801 [2024-07-15 11:38:54.379334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.801 [2024-07-15 11:38:54.379354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.801 [2024-07-15 11:38:54.379362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.801 [2024-07-15 11:38:54.389186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:10.801 [2024-07-15 11:38:54.389207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.801 [2024-07-15 11:38:54.389215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.397005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.397027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.397036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.407662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.407682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.407691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.416291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.416312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.416320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.426472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.426492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.426499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.436128] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.436148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.436155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.445141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.445166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.445174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.454492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.454513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.454521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.464626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.464646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.464654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.472471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.472491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.472499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.485116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.485137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.485145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.493039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.493059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.493067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:11.061 [2024-07-15 11:38:54.503076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.503096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.503104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.512013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.512033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.512041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.522012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.522032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.522040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.532018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.532038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.532046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.541713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.541734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.541742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.549839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.549859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.061 [2024-07-15 11:38:54.549868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.061 [2024-07-15 11:38:54.560134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.061 [2024-07-15 11:38:54.560154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.062 [2024-07-15 11:38:54.560162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.062 [2024-07-15 11:38:54.569925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.062 [2024-07-15 11:38:54.569946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.062 [2024-07-15 11:38:54.569954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.062 [2024-07-15 11:38:54.578444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.062 [2024-07-15 11:38:54.578464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.062 [2024-07-15 11:38:54.578472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.062 [2024-07-15 11:38:54.589684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.062 [2024-07-15 11:38:54.589705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.062 [2024-07-15 11:38:54.589713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.062 [2024-07-15 11:38:54.597961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.062 [2024-07-15 11:38:54.597982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.062 [2024-07-15 11:38:54.597990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.062 [2024-07-15 11:38:54.609938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.062 [2024-07-15 11:38:54.609959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.062 [2024-07-15 11:38:54.609970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.062 [2024-07-15 11:38:54.618209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.062 [2024-07-15 11:38:54.618234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.062 [2024-07-15 11:38:54.618242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.062 [2024-07-15 11:38:54.629878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.062 [2024-07-15 11:38:54.629901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.062 [2024-07-15 11:38:54.629910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.062 [2024-07-15 11:38:54.639775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.062 [2024-07-15 11:38:54.639796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.062 [2024-07-15 11:38:54.639804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.062 [2024-07-15 11:38:54.649882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.062 [2024-07-15 11:38:54.649903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.062 [2024-07-15 11:38:54.649912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.323 [2024-07-15 11:38:54.659128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.323 [2024-07-15 11:38:54.659149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.323 [2024-07-15 11:38:54.659158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.323 [2024-07-15 11:38:54.667678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.323 [2024-07-15 11:38:54.667698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.323 [2024-07-15 11:38:54.667707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.323 [2024-07-15 11:38:54.678022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.323 [2024-07-15 11:38:54.678044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.323 [2024-07-15 11:38:54.678052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.323 [2024-07-15 11:38:54.687624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.323 [2024-07-15 11:38:54.687645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.323 [2024-07-15 11:38:54.687653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.323 [2024-07-15 11:38:54.696423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.323 [2024-07-15 11:38:54.696450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:11.323 [2024-07-15 11:38:54.696469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.323 [2024-07-15 11:38:54.706060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.323 [2024-07-15 11:38:54.706081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.323 [2024-07-15 11:38:54.706089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.323 [2024-07-15 11:38:54.715262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.323 [2024-07-15 11:38:54.715283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.323 [2024-07-15 11:38:54.715291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.323 [2024-07-15 11:38:54.724118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.323 [2024-07-15 11:38:54.724139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.323 [2024-07-15 11:38:54.724147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.323 [2024-07-15 11:38:54.734201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.323 [2024-07-15 11:38:54.734222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.323 [2024-07-15 11:38:54.734237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.323 [2024-07-15 11:38:54.743395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.323 [2024-07-15 11:38:54.743415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.323 [2024-07-15 11:38:54.743424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.751865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.751885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.751894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.762075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.762095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:21321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.762103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.771714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.771735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.771746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.780346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.780367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.780375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.790431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.790451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.790459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.801376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.801397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.801405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.810018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.810039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.810047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.820664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.820684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.820692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.832165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.832187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.832195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.844940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.844961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.844969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.853248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.853284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.853292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.869026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.869050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.869058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.877870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.877890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.877898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.890238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.890258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.890266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.324 [2024-07-15 11:38:54.902702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.324 [2024-07-15 11:38:54.902722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.324 [2024-07-15 11:38:54.902730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.626 [2024-07-15 11:38:54.916237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 
00:28:11.626 [2024-07-15 11:38:54.916259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.626 [2024-07-15 11:38:54.916267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.626 [2024-07-15 11:38:54.927891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.626 [2024-07-15 11:38:54.927911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.626 [2024-07-15 11:38:54.927919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.626 [2024-07-15 11:38:54.936572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.626 [2024-07-15 11:38:54.936593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.626 [2024-07-15 11:38:54.936601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.626 [2024-07-15 11:38:54.949287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x236cf20) 00:28:11.626 [2024-07-15 11:38:54.949308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.626 [2024-07-15 11:38:54.949317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.626 00:28:11.626 Latency(us) 00:28:11.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.626 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:11.626 nvme0n1 : 2.01 25899.61 101.17 0.00 0.00 4937.85 2350.75 18008.15 00:28:11.626 =================================================================================================================== 00:28:11.626 Total : 25899.61 101.17 0.00 0.00 4937.85 2350.75 18008.15 00:28:11.626 0 00:28:11.626 11:38:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:11.626 11:38:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:11.626 11:38:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:11.626 | .driver_specific 00:28:11.626 | .nvme_error 00:28:11.626 | .status_code 00:28:11.626 | .command_transient_transport_error' 00:28:11.626 11:38:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:11.626 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 203 > 0 )) 00:28:11.626 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 752481 00:28:11.626 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 752481 ']' 00:28:11.626 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 752481 00:28:11.626 
11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:11.626 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:11.626 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 752481 00:28:11.626 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:11.626 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:11.626 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 752481' 00:28:11.626 killing process with pid 752481 00:28:11.626 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 752481 00:28:11.626 Received shutdown signal, test time was about 2.000000 seconds 00:28:11.626 00:28:11.626 Latency(us) 00:28:11.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.626 =================================================================================================================== 00:28:11.626 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:11.626 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 752481 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=753176 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 753176 /var/tmp/bperf.sock 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 753176 ']' 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:11.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:11.885 11:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.885 [2024-07-15 11:38:55.427181] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:28:11.885 [2024-07-15 11:38:55.427231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753176 ] 00:28:11.885 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:11.885 Zero copy mechanism will not be used. 00:28:11.885 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.144 [2024-07-15 11:38:55.495526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.144 [2024-07-15 11:38:55.575093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.711 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:12.711 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:12.711 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:12.711 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:12.969 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:12.969 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.969 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.969 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.969 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.969 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.227 nvme0n1 00:28:13.227 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:13.227 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.227 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.227 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.227 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:13.227 11:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:13.486 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:13.486 Zero copy mechanism will not be used. 00:28:13.486 Running I/O for 2 seconds... 
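(Editor's note, for readers skimming this log: the digest-error pass being traced here reduces to the short sequence below. It is a condensed sketch assembled only from commands that appear verbatim in this trace; the rpc shorthand variable, the explicit backgrounding of bdevperf, and the ordering comments are added for readability, and the trace does not show which application the bare rpc_cmd helper targets.)

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Start bdevperf on the bperf socket: 2-second randread, 128 KiB I/O, queue depth 16.
    # The harness waits for the socket (waitforlisten) before issuing RPCs.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    # Count NVMe errors and disable bdev-layer retries so transport errors surface.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target over TCP with data digest (--ddgst) enabled.
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd crc32c operation so received data fails the digest check.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the workload, then read back the transient-transport-error counter.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
    $rpc bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The previous pass (4 KiB randread, queue depth 128) finished with that counter at 203, which is the value checked by the (( 203 > 0 )) assertion earlier in this log; the pass that starts below repeats the same flow with 128 KiB I/O at queue depth 16.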
00:28:13.486 [2024-07-15 11:38:56.906985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.907018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.907028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.914237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.914263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.914272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.921106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.921128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.921136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.927870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.927891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.927899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.934432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.934453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.934461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.940769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.940789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.940797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.947103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.947124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.947133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.953201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.953221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.953234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.959042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.959063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.959070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.964922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.964942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.964950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.970607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.970630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.970638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.976339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.976360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.976368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.982042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.982062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.982070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.987797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.486 [2024-07-15 11:38:56.987818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.486 [2024-07-15 11:38:56.987825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.486 [2024-07-15 11:38:56.993562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:56.993582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:56.993591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:56.999221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:56.999246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:56.999254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.004906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.004927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:57.004935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.010450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.010470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:57.010478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.016076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.016096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:57.016104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.021784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.021805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:57.021813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.027385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.027405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:13.487 [2024-07-15 11:38:57.027413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.032948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.032968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:57.032976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.038606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.038626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:57.038634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.044390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.044411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:57.044419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.049939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.049959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:57.049967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.055490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.055509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:57.055519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.061089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.061110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:57.061118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.066683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.066704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:57.066715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.487 [2024-07-15 11:38:57.072365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.487 [2024-07-15 11:38:57.072386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.487 [2024-07-15 11:38:57.072394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.768 [2024-07-15 11:38:57.078072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.768 [2024-07-15 11:38:57.078093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.768 [2024-07-15 11:38:57.078101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.768 [2024-07-15 11:38:57.083715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.083735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.083743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.089327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.089347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.089355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.095033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.095053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.095061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.100666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.100686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.100694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.106223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.106250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.106258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.111759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.111779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.111787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.117321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.117345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.117353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.122886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.122908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.122916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.128366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.128389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.128397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.133784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.133807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.133815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.139254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.139275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.139283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.144757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 
00:28:13.769 [2024-07-15 11:38:57.144778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.144786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.150306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.150327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.150334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.155876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.155897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.155905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.161303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.161324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.161331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.166700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.166721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.166729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.172222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.172248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.172256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.177814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.177834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.177842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.183491] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.183512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.183521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.188977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.188997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.189005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.194458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.194479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.194487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.199969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.199990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.199998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.205469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.205489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.205497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.210942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.210965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.210973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.216492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.216513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.216522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.221847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.221867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.221875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.227202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.227223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.227238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.232507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.232527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.769 [2024-07-15 11:38:57.232535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.769 [2024-07-15 11:38:57.237969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.769 [2024-07-15 11:38:57.237989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.237997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.243495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.243515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.243524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.248948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.248968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.248976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.254446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.254466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.254475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.259918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.259939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.259947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.265234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.265254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.265262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.270722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.270741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.270750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.276232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.276269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.276278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.281742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.281763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.281771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.287214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.287242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.287250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.292497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.292518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.292526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.297791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.297811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.297819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.303149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.303170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.303181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.308526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.308547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.308555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.313992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.314014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.314022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.319487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.319507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.319515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.325031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.325051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.325059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.330485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.330504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:13.770 [2024-07-15 11:38:57.330512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.335948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.335969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.335977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.341448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.341469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.341477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.347005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.347026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.347034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.770 [2024-07-15 11:38:57.352539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:13.770 [2024-07-15 11:38:57.352563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.770 [2024-07-15 11:38:57.352570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.358211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.358237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.358245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.363659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.363680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.363688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.369060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.369080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.369088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.374621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.374641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.374649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.380207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.380233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.380241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.385897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.385918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.385926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.391650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.391671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.391679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.397234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.397255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.397266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.402840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.402860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.402868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.408539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.408559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.408567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.414254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.414275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.414283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.419759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.419780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.419789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.425467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.425488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.425496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.030 [2024-07-15 11:38:57.432094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.030 [2024-07-15 11:38:57.432118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-15 11:38:57.432126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.437784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.437804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.437813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.443581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.443602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.443610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.449426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 
00:28:14.031 [2024-07-15 11:38:57.449452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.449460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.455143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.455164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.455172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.460885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.460906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.460914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.466595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.466615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.466623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.471851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.471873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.471881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.477206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.477234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.477242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.482863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.482885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.482893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.488739] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.488761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.488770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.494874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.494896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.494904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.501322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.501344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.501353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.506773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.506794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.506803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.512636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.512656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.512664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.518386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.518407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.518416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.524150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.524171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.524179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:14.031 [2024-07-15 11:38:57.529911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.529933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.529941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.535517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.535539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.535547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.541064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.541084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-15 11:38:57.541092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.031 [2024-07-15 11:38:57.546277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.031 [2024-07-15 11:38:57.546298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.546310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.551721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.551744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.551752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.557158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.557178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.557187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.562609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.562630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.562639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.568007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.568028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.568036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.573404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.573424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.573433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.578714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.578735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.578743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.584033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.584055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.584062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.589403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.589423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.589430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.594973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.594997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.595005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.600530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.600551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.600559] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.606184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.606206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.606214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.611773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.611794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.611802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.032 [2024-07-15 11:38:57.617366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.032 [2024-07-15 11:38:57.617388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-15 11:38:57.617396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.623072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.623094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.623102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.628828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.628851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.628859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.634564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.634586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.634594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.640086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.640109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.640117] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.645751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.645771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.645779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.651491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.651513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.651521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.657018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.657039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.657047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.662604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.662625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.662633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.668248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.668269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.668276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.673951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.673971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.673980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.679715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.679737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:14.292 [2024-07-15 11:38:57.679745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.685332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.685354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.685361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.690907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.690928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.690939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.696646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.696668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.696677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.702405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.702426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.702434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.707989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.708010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.708018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.713676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.713697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.713705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.719396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.719418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.719426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.725034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.725055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.725063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.730557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.730578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.730586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.736131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.736153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.736161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.741670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.741691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.741699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.747319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.747341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.747348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.753061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.753082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.753090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.758672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.758693] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.758701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.764206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.764233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.764241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.769745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.769767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.769775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.775236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.775257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.775265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.780749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.780770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.780778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.786282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.786303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.786314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.791660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.791682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.791690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.797135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.797156] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.797164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.802657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.802679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.802687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.808285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.808306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.808314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.813859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.813881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.813888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.819259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.819281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.819289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.824593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.824613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.824622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.829963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.829984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.829992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.835315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 
00:28:14.292 [2024-07-15 11:38:57.835339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.835347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.840795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.840816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.840825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.846349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.846372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.846381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.851887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.851910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.851919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.857432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.857456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.857465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.862879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.862903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.862911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.868402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.868424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.868433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.873957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.873981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.873989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.292 [2024-07-15 11:38:57.879557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.292 [2024-07-15 11:38:57.879579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.292 [2024-07-15 11:38:57.879587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.551 [2024-07-15 11:38:57.885251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.551 [2024-07-15 11:38:57.885273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.551 [2024-07-15 11:38:57.885281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.551 [2024-07-15 11:38:57.890868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.551 [2024-07-15 11:38:57.890890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.890899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.896531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.896554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.896562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.902275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.902296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.902305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.907910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.907933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.907941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.913451] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.913472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.913481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.919089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.919111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.919119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.924765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.924787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.924796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.930399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.930420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.930432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.936033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.936054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.936063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.941587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.941609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.941617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.947246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.947269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.947278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:14.552 [2024-07-15 11:38:57.952979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.953002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.953010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.958660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.958683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.958691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.964303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.964324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.964332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.970097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.970118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.970126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.975845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.975865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.975873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.981403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.981428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.981438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.986993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.987014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.987022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.992705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.992725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.992733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:57.998392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:57.998414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:57.998422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:58.004017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:58.004038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:58.004045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:58.009602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:58.009623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:58.009631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:58.015107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:58.015129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:58.015137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:58.020802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:58.020824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:58.020832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:58.026488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:58.026511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:58.026519] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:58.032039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:58.032061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:58.032069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:58.037508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:58.037530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:58.037538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:58.043051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:58.043073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:58.043081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:58.048755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:58.048776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.552 [2024-07-15 11:38:58.048784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.552 [2024-07-15 11:38:58.054470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.552 [2024-07-15 11:38:58.054493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.054501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.060195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.060218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.060232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.065976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.065997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.066004] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.072007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.072028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.072037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.077782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.077804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.077815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.083399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.083421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.083430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.089120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.089141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.089150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.094812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.094833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.094841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.100404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.100425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.100433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.106023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.106045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:14.553 [2024-07-15 11:38:58.106053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.111537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.111559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.111566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.117240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.117261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.117270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.123163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.123185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.123193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.128805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.128827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.128835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.134257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.134280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.134288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.553 [2024-07-15 11:38:58.139767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.553 [2024-07-15 11:38:58.139790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.553 [2024-07-15 11:38:58.139799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.812 [2024-07-15 11:38:58.145363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.812 [2024-07-15 11:38:58.145385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.812 [2024-07-15 11:38:58.145394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.812 [2024-07-15 11:38:58.150870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.812 [2024-07-15 11:38:58.150894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.812 [2024-07-15 11:38:58.150903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.812 [2024-07-15 11:38:58.156417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.812 [2024-07-15 11:38:58.156439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.812 [2024-07-15 11:38:58.156447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.812 [2024-07-15 11:38:58.161983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.812 [2024-07-15 11:38:58.162004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.812 [2024-07-15 11:38:58.162013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.812 [2024-07-15 11:38:58.167390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.812 [2024-07-15 11:38:58.167411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.812 [2024-07-15 11:38:58.167419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.812 [2024-07-15 11:38:58.172814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.172835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.172850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.178349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.178371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.178380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.184287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.184309] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.184316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.190037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.190060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.190069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.195763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.195785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.195792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.201424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.201446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.201454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.207187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.207208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.207216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.213643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.213665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.213674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.221068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.221090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.221099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.227845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.227872] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.227881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.234349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.234372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.234380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.240697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.240719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.240727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.247969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.247992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.248001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.255487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.255512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.255520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.262898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.262925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.262933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.269977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.269999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.270007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.276053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 
00:28:14.813 [2024-07-15 11:38:58.276075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.276082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.282089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.282111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.282118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.288343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.288364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.288372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.294141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.294162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.294170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.300286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.300307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.300315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.306808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.306830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.306838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.313028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.313049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.813 [2024-07-15 11:38:58.313058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.813 [2024-07-15 11:38:58.318677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.813 [2024-07-15 11:38:58.318699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.318707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.814 [2024-07-15 11:38:58.324406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.814 [2024-07-15 11:38:58.324428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.324435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.814 [2024-07-15 11:38:58.330262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.814 [2024-07-15 11:38:58.330283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.330291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.814 [2024-07-15 11:38:58.335921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.814 [2024-07-15 11:38:58.335942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.335955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.814 [2024-07-15 11:38:58.341834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.814 [2024-07-15 11:38:58.341856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.341864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.814 [2024-07-15 11:38:58.348070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.814 [2024-07-15 11:38:58.348093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.348101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.814 [2024-07-15 11:38:58.354196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.814 [2024-07-15 11:38:58.354217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.354231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.814 [2024-07-15 11:38:58.360205] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.814 [2024-07-15 11:38:58.360233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.360241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.814 [2024-07-15 11:38:58.367042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.814 [2024-07-15 11:38:58.367064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.367073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.814 [2024-07-15 11:38:58.372555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.814 [2024-07-15 11:38:58.372577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.372585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.814 [2024-07-15 11:38:58.378369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.814 [2024-07-15 11:38:58.378390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.378398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.814 [2024-07-15 11:38:58.385556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.814 [2024-07-15 11:38:58.385579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.385587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.814 [2024-07-15 11:38:58.394453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:14.814 [2024-07-15 11:38:58.394479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.814 [2024-07-15 11:38:58.394487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.073 [2024-07-15 11:38:58.402886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.073 [2024-07-15 11:38:58.402910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.073 [2024-07-15 11:38:58.402918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:15.073 [2024-07-15 11:38:58.411864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.073 [2024-07-15 11:38:58.411887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.073 [2024-07-15 11:38:58.411895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.073 [2024-07-15 11:38:58.420725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.073 [2024-07-15 11:38:58.420748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.073 [2024-07-15 11:38:58.420757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.073 [2024-07-15 11:38:58.429817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.073 [2024-07-15 11:38:58.429840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.073 [2024-07-15 11:38:58.429849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.073 [2024-07-15 11:38:58.438447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.073 [2024-07-15 11:38:58.438469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.073 [2024-07-15 11:38:58.438478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.073 [2024-07-15 11:38:58.445625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.073 [2024-07-15 11:38:58.445648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.073 [2024-07-15 11:38:58.445656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.073 [2024-07-15 11:38:58.452783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.073 [2024-07-15 11:38:58.452805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.073 [2024-07-15 11:38:58.452813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.073 [2024-07-15 11:38:58.459599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.073 [2024-07-15 11:38:58.459620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.073 [2024-07-15 11:38:58.459629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.073 [2024-07-15 11:38:58.466917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.073 [2024-07-15 11:38:58.466938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.466946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.475370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.475393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.475401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.483042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.483064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.483072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.490367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.490389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.490398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.497286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.497307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.497315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.500913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.500934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.500943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.507672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.507694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.507702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.514182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.514205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.514213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.523646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.523669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.523683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.532326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.532349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.532357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.541137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.541159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.541168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.550568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.550590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.550599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.559345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.559367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.559376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.568629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.568652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.074 [2024-07-15 11:38:58.568661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.578405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.578427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.578436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.587827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.587848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.587857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.596816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.596839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.596847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.605578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.605601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.605610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.614623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.614646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.614655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.623600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.623623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.623632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.632048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.632070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.632079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.641032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.641055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.641064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.649770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.649792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.649800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.074 [2024-07-15 11:38:58.658972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.074 [2024-07-15 11:38:58.658995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.074 [2024-07-15 11:38:58.659004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.667794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.667818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.667827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.677507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.677530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.677542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.686319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.686342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.686351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.695130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.695153] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.695162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.702737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.702758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.702767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.710467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.710488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.710497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.717681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.717703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.717711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.723949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.723971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.723979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.730307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.730328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.730337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.736729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.736751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.736758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.742900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.742925] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.742933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.748876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.748897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.748905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.754784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.754805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.754813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.760670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.760691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.760699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.766475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.766496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.766504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.772444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.772466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.772473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.778336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.778357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.778365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.784681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 
00:28:15.335 [2024-07-15 11:38:58.784703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.784711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.790916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.790939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.790947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.797869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.797890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.797898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.803820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.803851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.803859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.809907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.809928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.809936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.815679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.815700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.815708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.822408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.822430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.822437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.829817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.829837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.829845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.836546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.836567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.836575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.843311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.843333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.843341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.850169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.850190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.335 [2024-07-15 11:38:58.850202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.335 [2024-07-15 11:38:58.856214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.335 [2024-07-15 11:38:58.856241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.336 [2024-07-15 11:38:58.856249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.336 [2024-07-15 11:38:58.862100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.336 [2024-07-15 11:38:58.862122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.336 [2024-07-15 11:38:58.862130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.336 [2024-07-15 11:38:58.868183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.336 [2024-07-15 11:38:58.868204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.336 [2024-07-15 11:38:58.868211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.336 [2024-07-15 11:38:58.873941] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.336 [2024-07-15 11:38:58.873962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.336 [2024-07-15 11:38:58.873970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.336 [2024-07-15 11:38:58.880009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.336 [2024-07-15 11:38:58.880030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.336 [2024-07-15 11:38:58.880037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.336 [2024-07-15 11:38:58.885918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.336 [2024-07-15 11:38:58.885939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.336 [2024-07-15 11:38:58.885947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.336 [2024-07-15 11:38:58.890574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.336 [2024-07-15 11:38:58.890596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.336 [2024-07-15 11:38:58.890604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.336 [2024-07-15 11:38:58.895978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20480b0) 00:28:15.336 [2024-07-15 11:38:58.895999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.336 [2024-07-15 11:38:58.896009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.336 00:28:15.336 Latency(us) 00:28:15.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.336 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:15.336 nvme0n1 : 2.00 5101.06 637.63 0.00 0.00 3133.22 961.67 12765.27 00:28:15.336 =================================================================================================================== 00:28:15.336 Total : 5101.06 637.63 0.00 0.00 3133.22 961.67 12765.27 00:28:15.336 0 00:28:15.336 11:38:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:15.336 11:38:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:15.336 11:38:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:15.336 | .driver_specific 00:28:15.336 | .nvme_error 00:28:15.336 | .status_code 00:28:15.336 | .command_transient_transport_error' 00:28:15.336 11:38:58 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:15.594 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 329 > 0 )) 00:28:15.594 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 753176 00:28:15.595 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 753176 ']' 00:28:15.595 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 753176 00:28:15.595 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:15.595 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:15.595 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 753176 00:28:15.595 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:15.595 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:15.595 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 753176' 00:28:15.595 killing process with pid 753176 00:28:15.595 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 753176 00:28:15.595 Received shutdown signal, test time was about 2.000000 seconds 00:28:15.595 00:28:15.595 Latency(us) 00:28:15.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.595 =================================================================================================================== 00:28:15.595 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:15.595 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 753176 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=753686 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 753686 /var/tmp/bperf.sock 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 753686 ']' 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:15.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:15.853 11:38:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.853 [2024-07-15 11:38:59.381433] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:28:15.853 [2024-07-15 11:38:59.381484] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753686 ] 00:28:15.853 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.112 [2024-07-15 11:38:59.447862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.112 [2024-07-15 11:38:59.527922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.680 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:16.680 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:16.680 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:16.680 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:16.940 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:16.940 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.940 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.940 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.940 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.940 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.199 nvme0n1 00:28:17.459 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:17.459 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.459 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.459 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.459 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:17.459 11:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.459 Running I/O for 2 seconds... 
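The trace above closes the previous randread error subtest and starts the randwrite variant: the transient-error count for the finished run is read back with bdev_get_iostat over /var/tmp/bperf.sock and extracted with the jq path .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error (329 here), the old bdevperf is killed, and a new one is launched for randwrite. As a reading aid only, the following is a condensed sketch assembled from the calls visible in the trace; rpc.py and bdevperf.py stand for the full script paths shown above, the waitforlisten/killprocess plumbing is omitted, and the two accel_error_inject_error calls go to the target's default RPC socket via rpc_cmd rather than to the bdevperf socket:

    # launch bdevperf with a private JSON-RPC socket; -z makes it wait for perform_tests
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    # count NVMe status codes as error statistics and retry failed commands indefinitely
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # keep crc32c error injection off while attaching the controller with data digest enabled
    rpc.py accel_error_inject_error -o crc32c -t disable
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt every 256th crc32c computation on the target, then drive I/O for two seconds
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    bdevperf.py -s /var/tmp/bperf.sock perform_tests

The corrupted digests are what produce the long run of "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" records that follows.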
00:28:17.459 [2024-07-15 11:39:00.895905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ee5c8 00:28:17.459 [2024-07-15 11:39:00.896700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:00.896732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:00.906482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190fac10 00:28:17.459 [2024-07-15 11:39:00.907644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:00.907671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:00.915131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:00.915318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:00.915339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:00.924693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:00.924869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:00.924887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:00.934393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:00.934568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:00.934586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:00.943947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:00.944111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:00.944129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:00.953502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:00.953666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:00.953684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:00.962970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:00.963132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:00.963151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:00.972447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:00.972611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:00.972630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:00.981918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:00.982079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:00.982097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:00.991393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:00.991560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:00.991578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:01.000902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:01.001068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:01.001086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:01.010585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:01.010753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:01.010771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:01.020155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:01.020328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:01.020346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:01.029651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:01.029816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:01.029833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:01.039112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:01.039309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:01.039328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.459 [2024-07-15 11:39:01.048645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.459 [2024-07-15 11:39:01.048808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.459 [2024-07-15 11:39:01.048826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.718 [2024-07-15 11:39:01.058294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.718 [2024-07-15 11:39:01.058460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.718 [2024-07-15 11:39:01.058477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.718 [2024-07-15 11:39:01.067784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.718 [2024-07-15 11:39:01.067948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.718 [2024-07-15 11:39:01.067966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.077320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.077499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.077517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.086918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.087084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.087102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.096612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.096793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.096811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.106215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.106403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.106422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.115840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.116006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.116025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.125470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.125633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.125650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.135006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.135168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.135185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.144604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.144765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.144782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.154167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.154342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.154363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.163929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.164094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.164112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.173440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.173620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.173638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.183091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.183279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.183298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.192762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.192942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.192962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.202534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.202707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.202725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.212312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.212492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.212511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.222080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.222268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.222287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.231797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.231981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.232000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.241542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.241727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.241746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.250998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.251187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.251206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.260687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.260870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.260889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.270404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.270587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.270606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.279929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.280108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.280127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.289457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.289638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 
11:39:01.289657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.298908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.299090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.299109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.719 [2024-07-15 11:39:01.308464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.719 [2024-07-15 11:39:01.308644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.719 [2024-07-15 11:39:01.308663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.318030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.318208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.318232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.327470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.327651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.327670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.336983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.337163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.337181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.346665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.346847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.346868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.356295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.356473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:17.980 [2024-07-15 11:39:01.356492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.365770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.365949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.365968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.375216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.375406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.375425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.384636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.384807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.384825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.394049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.394230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.394250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.403486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.403664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.403687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.413296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.413484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.413504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.422990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.423173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8487 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.423194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.432446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.432628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.432646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.441925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.442106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.442126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.451431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.451622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.451641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.460946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.461125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.461145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.470420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.470599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.470617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.479867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.480046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.480064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.489371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.489554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10294 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.980 [2024-07-15 11:39:01.489573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.980 [2024-07-15 11:39:01.499075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.980 [2024-07-15 11:39:01.499258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.981 [2024-07-15 11:39:01.499278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-07-15 11:39:01.508767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.981 [2024-07-15 11:39:01.508945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.981 [2024-07-15 11:39:01.508964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-07-15 11:39:01.518327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.981 [2024-07-15 11:39:01.518506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.981 [2024-07-15 11:39:01.518525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-07-15 11:39:01.527774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.981 [2024-07-15 11:39:01.527953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.981 [2024-07-15 11:39:01.527972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-07-15 11:39:01.537238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.981 [2024-07-15 11:39:01.537418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.981 [2024-07-15 11:39:01.537437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-07-15 11:39:01.546691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.981 [2024-07-15 11:39:01.546861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.981 [2024-07-15 11:39:01.546880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-07-15 11:39:01.556219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.981 [2024-07-15 11:39:01.556391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:85 nsid:1 lba:103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.981 [2024-07-15 11:39:01.556411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.981 [2024-07-15 11:39:01.565688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:17.981 [2024-07-15 11:39:01.565851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.981 [2024-07-15 11:39:01.565870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.575406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.575572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.575591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.584980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.585146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.585164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.594579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.594752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.594772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.604261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.604439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.604457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.613757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.613933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.613952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.623216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.623400] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.623419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.632832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.633014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.633033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.642275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.642453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.642472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.651752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.651937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.651959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.661280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.661457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.661476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.670860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.671043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.671062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.680575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.680753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.680772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.690125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.690313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.690332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.699648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.699826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.699845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.709152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.241 [2024-07-15 11:39:01.709339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.241 [2024-07-15 11:39:01.709358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.241 [2024-07-15 11:39:01.718647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.242 [2024-07-15 11:39:01.718826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.242 [2024-07-15 11:39:01.718846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-07-15 11:39:01.728119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.242 [2024-07-15 11:39:01.728323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.242 [2024-07-15 11:39:01.728342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-07-15 11:39:01.737623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.242 [2024-07-15 11:39:01.737806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.242 [2024-07-15 11:39:01.737825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-07-15 11:39:01.747085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.242 [2024-07-15 11:39:01.747263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.242 [2024-07-15 11:39:01.747281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-07-15 11:39:01.756675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.242 [2024-07-15 
11:39:01.756860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.242 [2024-07-15 11:39:01.756879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-07-15 11:39:01.766289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.242 [2024-07-15 11:39:01.766470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.242 [2024-07-15 11:39:01.766489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-07-15 11:39:01.775755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.242 [2024-07-15 11:39:01.775934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.242 [2024-07-15 11:39:01.775953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-07-15 11:39:01.785209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.242 [2024-07-15 11:39:01.785393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.242 [2024-07-15 11:39:01.785412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-07-15 11:39:01.794843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.242 [2024-07-15 11:39:01.795023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.242 [2024-07-15 11:39:01.795042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-07-15 11:39:01.804305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.242 [2024-07-15 11:39:01.804486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.242 [2024-07-15 11:39:01.804506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-07-15 11:39:01.813758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.242 [2024-07-15 11:39:01.813938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.242 [2024-07-15 11:39:01.813957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.242 [2024-07-15 11:39:01.823207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 
00:28:18.242 [2024-07-15 11:39:01.823413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.242 [2024-07-15 11:39:01.823433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.832946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.833131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.833150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.842629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.842812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.842830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.852259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.852443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.852463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.861799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.861976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.861995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.871282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.871462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.871481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.880730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.880908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.880927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.890161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with 
pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.890349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.890368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.899609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.899787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.899807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.909041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.909221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.909245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.918626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.918804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.918823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.928276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.928457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.928477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.937948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.938125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.938144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.947536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.947720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.947739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.957122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.957309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.957327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.966719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.966896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.966914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.976296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.976478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.976496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.985789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.985971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.985990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:01.995209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:01.995396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:01.995415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:02.004697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:02.004876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:02.004894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:02.014147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:02.014332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:02.014352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:02.023625] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:02.023804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:02.023823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.502 [2024-07-15 11:39:02.033035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.502 [2024-07-15 11:39:02.033213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.502 [2024-07-15 11:39:02.033237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.503 [2024-07-15 11:39:02.042587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.503 [2024-07-15 11:39:02.042764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.503 [2024-07-15 11:39:02.042783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.503 [2024-07-15 11:39:02.052184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.503 [2024-07-15 11:39:02.052374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.503 [2024-07-15 11:39:02.052394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.503 [2024-07-15 11:39:02.061832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.503 [2024-07-15 11:39:02.062011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.503 [2024-07-15 11:39:02.062030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.503 [2024-07-15 11:39:02.071394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.503 [2024-07-15 11:39:02.071575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.503 [2024-07-15 11:39:02.071593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.503 [2024-07-15 11:39:02.081006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.503 [2024-07-15 11:39:02.081192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.503 [2024-07-15 11:39:02.081212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.503 [2024-07-15 11:39:02.090642] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.503 [2024-07-15 11:39:02.090825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.503 [2024-07-15 11:39:02.090844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.100292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.100477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.100496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.110003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.110187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.110207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.119702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.119882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.119901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.129284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.129469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.129488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.139011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.139195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.139213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.148742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.148921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.148943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 
[2024-07-15 11:39:02.158465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.158650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.158669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.168106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.168291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.168310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.177706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.177888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.177907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.187329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.187511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.187530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.197015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.197199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.197219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.206778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.206976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.206995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.216798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.217003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.217023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:18.763 [2024-07-15 11:39:02.226760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.226948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.226968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.236673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.236854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.236876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.246400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.246584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.246603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.256193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.256387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.256407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.265932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.266114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.266133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.275640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.275824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.275843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.285332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.285512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.285532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.295030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.295208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.295232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.304680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.304863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.304883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.314382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.314562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.314581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.324024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.324212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.324236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.333647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.333831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.333850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.343263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.343442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.343462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.763 [2024-07-15 11:39:02.352942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:18.763 [2024-07-15 11:39:02.353127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.763 [2024-07-15 11:39:02.353146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.023 [2024-07-15 11:39:02.362636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.023 [2024-07-15 11:39:02.362815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.023 [2024-07-15 11:39:02.362834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.023 [2024-07-15 11:39:02.372318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.023 [2024-07-15 11:39:02.372515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.023 [2024-07-15 11:39:02.372534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.023 [2024-07-15 11:39:02.382042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.023 [2024-07-15 11:39:02.382221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.023 [2024-07-15 11:39:02.382245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.023 [2024-07-15 11:39:02.391657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.023 [2024-07-15 11:39:02.391841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.023 [2024-07-15 11:39:02.391862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.023 [2024-07-15 11:39:02.401276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.023 [2024-07-15 11:39:02.401455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.023 [2024-07-15 11:39:02.401474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.023 [2024-07-15 11:39:02.411051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.023 [2024-07-15 11:39:02.411238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.023 [2024-07-15 11:39:02.411274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.420787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.420968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.420986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.430427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.430607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.430625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.440115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.440307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.440326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.449881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.450062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.450079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.459578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.459764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.459783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.469236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.469438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.469456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.478875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.479056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.479073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.488515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.488701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.488722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.498145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.498350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.498369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.507841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.508019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.508037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.517483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.517668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.517686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.527080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.527264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.527282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.536737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.536920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.536938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.546405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.546588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.546605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.556044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.556232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.556250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.565677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.565861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.565880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.575348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.575547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.575564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.584956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.585135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.585153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.594535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.594720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.594737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.604211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.024 [2024-07-15 11:39:02.604403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.024 [2024-07-15 11:39:02.604421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.024 [2024-07-15 11:39:02.613922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.614108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.614127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.623655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.623838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 
11:39:02.623856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.633310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.633490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.633508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.643041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.643230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.643247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.652707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.652890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.652908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.662393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.662578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.662596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.672089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.672277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.672294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.681680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.681861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.681878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.691341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.691534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:19.284 [2024-07-15 11:39:02.691552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.701028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.701223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.701245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.710698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.710883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.710901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.720408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.720590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.720607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.730041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.730220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.730242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.739659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.739849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.739870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.749235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.749413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.749430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.758928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.759109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8700 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.759127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.768585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.768762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.768779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.778199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.778404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.778424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.787842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.788020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.788037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.797581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.797762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.797779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.807272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.807460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.807477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.817103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.817292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.817310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.826767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.826954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19674 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.826971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.836379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.284 [2024-07-15 11:39:02.836558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.284 [2024-07-15 11:39:02.836575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.284 [2024-07-15 11:39:02.846058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.285 [2024-07-15 11:39:02.846241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.285 [2024-07-15 11:39:02.846258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.285 [2024-07-15 11:39:02.855785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.285 [2024-07-15 11:39:02.855970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.285 [2024-07-15 11:39:02.855988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.285 [2024-07-15 11:39:02.865426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.285 [2024-07-15 11:39:02.865623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.285 [2024-07-15 11:39:02.865642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.544 [2024-07-15 11:39:02.875179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.544 [2024-07-15 11:39:02.875371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-15 11:39:02.875389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.544 [2024-07-15 11:39:02.884873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.544 [2024-07-15 11:39:02.885055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-15 11:39:02.885073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.544 [2024-07-15 11:39:02.894447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb4d0) with pdu=0x2000190ef6a8 00:28:19.544 [2024-07-15 11:39:02.894631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:85 nsid:1 lba:2080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-15 11:39:02.894648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.544 00:28:19.544 Latency(us) 00:28:19.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.544 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:19.544 nvme0n1 : 2.00 26596.17 103.89 0.00 0.00 4804.34 1980.33 10143.83 00:28:19.544 =================================================================================================================== 00:28:19.544 Total : 26596.17 103.89 0.00 0.00 4804.34 1980.33 10143.83 00:28:19.544 0 00:28:19.544 11:39:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:19.544 11:39:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:19.544 11:39:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:19.544 | .driver_specific 00:28:19.544 | .nvme_error 00:28:19.544 | .status_code 00:28:19.544 | .command_transient_transport_error' 00:28:19.544 11:39:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:19.544 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 209 > 0 )) 00:28:19.544 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 753686 00:28:19.544 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 753686 ']' 00:28:19.544 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 753686 00:28:19.544 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:19.544 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:19.544 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 753686 00:28:19.811 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:19.811 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:19.811 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 753686' 00:28:19.811 killing process with pid 753686 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 753686 00:28:19.812 Received shutdown signal, test time was about 2.000000 seconds 00:28:19.812 00:28:19.812 Latency(us) 00:28:19.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.812 =================================================================================================================== 00:28:19.812 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 753686 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:19.812 11:39:03 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=754482 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 754482 /var/tmp/bperf.sock 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 754482 ']' 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:19.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:19.812 11:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:19.812 [2024-07-15 11:39:03.381959] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:28:19.812 [2024-07-15 11:39:03.382008] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754482 ] 00:28:19.812 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:19.812 Zero copy mechanism will not be used. 
00:28:20.070 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.070 [2024-07-15 11:39:03.449668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.070 [2024-07-15 11:39:03.527518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.639 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:20.639 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:20.639 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:20.639 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:20.898 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:20.898 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.898 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.898 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.898 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.898 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.157 nvme0n1 00:28:21.157 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:21.157 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.157 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.157 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.157 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:21.157 11:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.420 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:21.420 Zero copy mechanism will not be used. 00:28:21.420 Running I/O for 2 seconds... 
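For readers following the trace above, this digest-error pass boils down to a handful of RPCs, restated below as a standalone sketch. It is a minimal summary of what host/digest.sh is doing in this run, not the script itself: it assumes the NVMe-oF/TCP target and the bdevperf instance on /var/tmp/bperf.sock are already up (as they are in this job), and reuses only the workspace paths, address, subsystem NQN and RPC parameters visible in the trace; which application answers the plain rpc_cmd socket is an assumption noted in the comments.

#!/usr/bin/env bash
# Sketch of the sequence traced above; see the assumptions in the note before this block.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf RPC socket (bperf_rpc in digest.sh)
TGT_RPC="$SPDK/scripts/rpc.py"                            # default RPC socket used by rpc_cmd (assumed to be the target app in this job)

# Keep per-command NVMe error statistics and retry failed I/O indefinitely.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start from a clean injection state, attach the subsystem with the TCP data
# digest (--ddgst) enabled, then turn on crc32c corruption (-t corrupt -i 32,
# exactly as traced) so the computed data digests stop matching.
$TGT_RPC accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the 2-second workload; the digest mismatches surface as the
# "Data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions
# that fill the log that follows.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

# Afterwards digest.sh reads the transient-error count from the bdev iostat
# (209 in the 4096-byte run summarized earlier) and checks that it is > 0.
$BPERF_RPC bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'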
00:28:21.420 [2024-07-15 11:39:04.811311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.811702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.811733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.817452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.817829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.817853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.824696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.825061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.825084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.832166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.832559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.832581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.838597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.838990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.839011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.844660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.845038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.845060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.850295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.850675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.850696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.855512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.855905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.855927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.860756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.861127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.861148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.865785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.866143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.866164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.870850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.871216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.871245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.876062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.876445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.876466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.881403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.881776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.881796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.886510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.886877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.420 [2024-07-15 11:39:04.886898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.420 [2024-07-15 11:39:04.891567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.420 [2024-07-15 11:39:04.891931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.891951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.896896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.897258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.897278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.901978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.902353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.902374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.907464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.907838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.907858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.912619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.912979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.913000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.917933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.918306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.918327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.923060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.923438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.923459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.928034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.928405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.928425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.933558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.933926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.933947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.939343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.939708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.939727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.944738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.945115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.945151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.951422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.951811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.951831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.959388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.959764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.959784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.966740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.967096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 
[2024-07-15 11:39:04.967121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.973517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.973897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.973918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.979461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.979843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.979863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.985370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.985757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.985777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.991329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.991732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.991753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:04.997967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:04.998375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:04.998395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.421 [2024-07-15 11:39:05.004184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.421 [2024-07-15 11:39:05.004580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.421 [2024-07-15 11:39:05.004600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.010423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.010800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.010820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.016472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.016843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.016863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.022536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.022892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.022912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.029048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.029440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.029460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.034826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.035198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.035219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.040617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.040994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.041014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.045978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.046358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.046378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.051265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.051642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.051662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.056385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.056775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.056795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.061568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.061938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.061959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.066973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.067359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.067380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.072844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.073221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.073248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.078309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.078710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.078730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.084232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.084610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.084631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.089830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.090199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.090219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.095158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.095556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.095575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.100154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.100528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.100548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.105100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.105478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.105498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.110168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.110542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.110564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.115247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.115627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.115650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.120747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 [2024-07-15 11:39:05.121122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.121143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.125844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.683 
[2024-07-15 11:39:05.126214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.683 [2024-07-15 11:39:05.126239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.683 [2024-07-15 11:39:05.130837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.131211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.131237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.135900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.136273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.136293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.140946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.141329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.141349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.146031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.146405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.146426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.151127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.151500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.151520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.156223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.156603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.156623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.162141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.162540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.162561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.167482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.167858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.167879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.172565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.172933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.172955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.177561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.177939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.177960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.182557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.182911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.182930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.187634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.187987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.188007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.192639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.193013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.193033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.197686] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.198056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.198077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.202673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.203026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.203045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.208042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.208414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.208434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.213134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.213521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.213542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.218160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.218532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.218553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.223189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.223559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.223578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.228350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.228713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.228734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:21.684 [2024-07-15 11:39:05.233290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.233670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.233691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.238240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.238610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.238629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.243411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.243763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.243782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.248389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.248764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.248787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.253441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.253821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.253840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.258518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.258878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.258898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.263494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.263873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.263893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.684 [2024-07-15 11:39:05.268561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.684 [2024-07-15 11:39:05.268922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.684 [2024-07-15 11:39:05.268943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.945 [2024-07-15 11:39:05.273471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.945 [2024-07-15 11:39:05.273854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.945 [2024-07-15 11:39:05.273875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.945 [2024-07-15 11:39:05.278957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.945 [2024-07-15 11:39:05.279363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.945 [2024-07-15 11:39:05.279384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.945 [2024-07-15 11:39:05.283901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.945 [2024-07-15 11:39:05.284282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.945 [2024-07-15 11:39:05.284302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.945 [2024-07-15 11:39:05.288846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.945 [2024-07-15 11:39:05.289209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.945 [2024-07-15 11:39:05.289235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.945 [2024-07-15 11:39:05.294059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.945 [2024-07-15 11:39:05.294438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.945 [2024-07-15 11:39:05.294459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.945 [2024-07-15 11:39:05.301115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.945 [2024-07-15 11:39:05.301498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.945 [2024-07-15 11:39:05.301518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.945 [2024-07-15 11:39:05.307654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.945 [2024-07-15 11:39:05.308031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.945 [2024-07-15 11:39:05.308052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.945 [2024-07-15 11:39:05.313978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.945 [2024-07-15 11:39:05.314365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.945 [2024-07-15 11:39:05.314385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.945 [2024-07-15 11:39:05.319846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.945 [2024-07-15 11:39:05.320228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.945 [2024-07-15 11:39:05.320248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.945 [2024-07-15 11:39:05.325116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.945 [2024-07-15 11:39:05.325495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.945 [2024-07-15 11:39:05.325515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.945 [2024-07-15 11:39:05.330910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.331288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.331308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.336461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.336836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.336855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.342245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.342630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.342653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.348174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.348709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.348730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.354726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.355097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.355117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.361029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.361410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.361431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.367363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.367739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.367759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.374170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.374559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.374579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.381396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.381764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.381784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.388788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.389191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 
[2024-07-15 11:39:05.389211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.395503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.395873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.395893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.403548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.403950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.403970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.412479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.412884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.412904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.421991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.422369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.422389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.431071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.431469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.431489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.439919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.440307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.440327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.447636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.448022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.448041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.456444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.456818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.456838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.464940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.465340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.465360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.472696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.473068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.473087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.480873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.481285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.481305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.489701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.490093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.490112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.498325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.498696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.498715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.506812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.507205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.507231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.515789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.516182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.516201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.525398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.525769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.525788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.946 [2024-07-15 11:39:05.533400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:21.946 [2024-07-15 11:39:05.533803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.946 [2024-07-15 11:39:05.533823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.541858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.542272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.542292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.549102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.549518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.549541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.556237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.556622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.556642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.562638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.563002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.563022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.569549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.569941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.569962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.575865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.576237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.576257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.582051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.582426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.582446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.588548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.588929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.588948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.595525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.595904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.595924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.602635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.603034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.603054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.609804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 
[2024-07-15 11:39:05.610187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.610206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.617270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.617666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.617686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.624402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.624790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.624811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.631566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.631943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.631963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.638423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.638797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.638817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.644772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.645152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.645172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.650560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.650939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.650959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.656441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.656811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.656830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.662318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.662680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.662699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.668140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.668524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.668544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.673900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.674261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.674281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.679459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.679836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.679857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.685026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.685390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.685410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.691142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.691539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.691560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.698061] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.698444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.698465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.705134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.705524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.705544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.712022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.712406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.712426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.718092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.207 [2024-07-15 11:39:05.718465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.207 [2024-07-15 11:39:05.718488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.207 [2024-07-15 11:39:05.723658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.724028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.724047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.208 [2024-07-15 11:39:05.729311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.729690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.729709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.208 [2024-07-15 11:39:05.735636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.736008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.736027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
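(The records above and below all follow the same pattern: the TCP transport's data_crc32_calc_done callback reports a data digest mismatch on the queue pair, and the host side then prints the WRITE command completing with a generic COMMAND TRANSIENT TRANSPORT ERROR (00/22). NVMe/TCP data digests are CRC32C over the PDU's data section. The short sketch below is a plain-C illustration of the kind of check that raises this error when the computed and carried digests disagree; it is not SPDK's implementation (which lives in tcp.c / nvme_tcp.h and may use accelerated CRC), and the payload buffer and received_ddgst value are purely hypothetical.)

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Reflected CRC-32C (Castagnoli) polynomial, bitwise reference implementation. */
#define CRC32C_POLY_REFLECTED 0x82F63B78u

/* CRC-32C over a buffer; seeded and finalized with 0xFFFFFFFF, as standard CRC32C digests are. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++) {
            crc = (crc >> 1) ^ ((crc & 1u) ? CRC32C_POLY_REFLECTED : 0u);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical stand-in for the DATA section of a received PDU. */
    uint8_t payload[32] = { 0 };
    /* Hypothetical digest value carried in the PDU's DDGST field. */
    uint32_t received_ddgst = 0x12345678u;

    if (crc32c(payload, sizeof(payload)) != received_ddgst) {
        /* A mismatch here is what the log reports as "Data digest error";
         * the affected command is then completed with a transient
         * transport error (00/22) rather than succeeding. */
        printf("data digest mismatch\n");
    }
    return 0;
}

(In this run the mismatches are expected: the test deliberately corrupts data digests to verify that the transport detects them and that the host sees the transient transport error status shown in each completion record.)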
00:28:22.208 [2024-07-15 11:39:05.741493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.741881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.741901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.208 [2024-07-15 11:39:05.747406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.747765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.747784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.208 [2024-07-15 11:39:05.753413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.753798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.753817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.208 [2024-07-15 11:39:05.759245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.759626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.759646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.208 [2024-07-15 11:39:05.764896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.765293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.765312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.208 [2024-07-15 11:39:05.770621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.771001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.771021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.208 [2024-07-15 11:39:05.777064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.777287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.777307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.208 [2024-07-15 11:39:05.783505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.783864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.783883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.208 [2024-07-15 11:39:05.789386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.789739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.789759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.208 [2024-07-15 11:39:05.795011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.208 [2024-07-15 11:39:05.795367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.208 [2024-07-15 11:39:05.795387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.468 [2024-07-15 11:39:05.800294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.468 [2024-07-15 11:39:05.800639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.468 [2024-07-15 11:39:05.800659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.468 [2024-07-15 11:39:05.805149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.468 [2024-07-15 11:39:05.805501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.805521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.809966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.810327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.810346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.815025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.815363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.815384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.820347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.820689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.820708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.825156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.825510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.825530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.829981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.830335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.830354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.834810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.835161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.835180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.839588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.839931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.839951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.844335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.844685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.844705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.849039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.849393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.849412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.853773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.854117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.854136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.858754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.859096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.859115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.864648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.864981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.865002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.869579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.869925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.869944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.874424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.874773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.874791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.879197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.879557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.879577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.883965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.884326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 
[2024-07-15 11:39:05.884345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.888763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.889109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.889128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.893537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.893886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.893906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.898275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.898615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.898634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.902950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.903295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.903316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.907702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.908042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.908062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.912407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.912742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.912761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.917106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.917468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.917487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.921831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.922179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.922199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.926544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.926865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.926885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.931265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.931621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.931640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.936006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.936346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.936366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.940688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.941033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.469 [2024-07-15 11:39:05.941057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.469 [2024-07-15 11:39:05.945608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.469 [2024-07-15 11:39:05.945952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:05.945971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:05.950645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:05.950988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:05.951008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:05.956301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:05.956703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:05.956723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:05.963699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:05.964190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:05.964209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:05.970779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:05.971131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:05.971150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:05.977445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:05.977798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:05.977816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:05.984971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:05.985382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:05.985402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:05.993330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:05.993818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:05.993838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:06.001692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:06.002167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:06.002187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:06.010395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:06.010843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:06.010863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:06.018472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:06.018933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:06.018952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:06.026958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:06.027424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:06.027444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:06.035650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:06.036107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:06.036126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:06.044196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:06.044667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:06.044687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.470 [2024-07-15 11:39:06.053187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.470 [2024-07-15 11:39:06.053590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.470 [2024-07-15 11:39:06.053611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.061547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 
[2024-07-15 11:39:06.062041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.062061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.070155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.070602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.070622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.077980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.078424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.078444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.086102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.086595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.086613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.094611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.095098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.095118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.103388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.103883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.103902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.111842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.112249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.112286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.120214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.120655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.120675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.127324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.127759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.127778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.135001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.135422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.135442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.142458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.142907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.142931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.149927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.150367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.150386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.157538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.158060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.158080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.165447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.165866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.165885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.172994] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.173448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.173468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.729 [2024-07-15 11:39:06.180255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.729 [2024-07-15 11:39:06.180657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-07-15 11:39:06.180677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.187664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.188103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.188122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.195609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.196094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.196114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.203281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.203729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.203749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.211419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.211870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.211890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.218953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.219418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.219437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:22.730 [2024-07-15 11:39:06.226560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.227009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.227028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.234382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.234810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.234831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.242308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.242757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.242777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.250364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.250795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.250815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.258923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.259339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.259359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.267321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.267751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.267771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.275799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.276222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.276246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.284161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.284603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.284623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.290323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.290690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.290710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.295558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.295925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.295945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.300775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.301127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.301147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.305580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.305933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.305954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.310451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.310810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.310829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.730 [2024-07-15 11:39:06.315189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.730 [2024-07-15 11:39:06.315551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-07-15 11:39:06.315571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.989 [2024-07-15 11:39:06.320111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.989 [2024-07-15 11:39:06.320465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.989 [2024-07-15 11:39:06.320485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.989 [2024-07-15 11:39:06.326305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.989 [2024-07-15 11:39:06.326717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.989 [2024-07-15 11:39:06.326741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.989 [2024-07-15 11:39:06.332779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.989 [2024-07-15 11:39:06.333127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.333146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.339244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.339682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.339702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.346830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.347274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.347294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.354675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.355101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.355120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.362173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.362624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.362644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.370380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.370840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.370859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.377928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.378391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.378411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.385374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.385870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.385890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.393373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.393843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.393863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.401272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.401766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.401785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.409028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.409508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.409528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.416873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.417347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 
[2024-07-15 11:39:06.417368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.424512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.424938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.424959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.432357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.432803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.432824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.438406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.438775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.438794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.443913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.444272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.444291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.449195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.449569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.449593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.455608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.455959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.455980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.460871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.461222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.461248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.466341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.466695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.466715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.472194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.472531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.472550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.478406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.478729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.478749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.484429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.484776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.484795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.490477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.490811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.490831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.495977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.496323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.496343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.501025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.501375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.501396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.505822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.506137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.506157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.510612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.510938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.510959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.515374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.515701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.515720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.520122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.520457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.520476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.524760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.525082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.525102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.529309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.529642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.529661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.533943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.534281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.534302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.538603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.538932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.538953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.543190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.543518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.543538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.547792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.548120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.548140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.990 [2024-07-15 11:39:06.552479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.990 [2024-07-15 11:39:06.552810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.990 [2024-07-15 11:39:06.552830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.991 [2024-07-15 11:39:06.557141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.991 [2024-07-15 11:39:06.557462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-07-15 11:39:06.557482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.991 [2024-07-15 11:39:06.561868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.991 [2024-07-15 11:39:06.562202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-07-15 11:39:06.562221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.991 [2024-07-15 11:39:06.566867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.991 
[2024-07-15 11:39:06.567196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-07-15 11:39:06.567217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-07-15 11:39:06.572548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.991 [2024-07-15 11:39:06.572873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-07-15 11:39:06.572893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.991 [2024-07-15 11:39:06.577385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:22.991 [2024-07-15 11:39:06.577725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-07-15 11:39:06.577745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.582093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.582428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.582453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.587167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.587496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.587517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.592102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.592427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.592447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.596759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.597080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.597100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.601402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.601734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.601753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.606069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.606395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.606416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.610697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.611013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.611033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.615695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.616026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.616046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.620365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.620702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.620721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.625348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.625679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.625699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.630464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.630774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.630794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.635790] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.636115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.636135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.642349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.642681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.642700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.647950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.648287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.648307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.652998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.653316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.653335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.658365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.658700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.658720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.663269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.663598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.663618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.668565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.668893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.668912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:23.252 [2024-07-15 11:39:06.674514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.674869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.674895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.680302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.680615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.680635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.252 [2024-07-15 11:39:06.685648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.252 [2024-07-15 11:39:06.685977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.252 [2024-07-15 11:39:06.685997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.690871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.691191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.691211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.696336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.696655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.696675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.701173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.701514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.701534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.706426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.706743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.706763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.712461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.712786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.712806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.718579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.718940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.718964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.724292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.724620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.724640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.729964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.730326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.730346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.736046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.736377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.736397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.742213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.742612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.742632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.748387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.748727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.748747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.753620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.753947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.753966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.758517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.758840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.758860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.763366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.763707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.763727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.768172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.768500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.768520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.772853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.773184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.773205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.777573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.777893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.777912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.782240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.782575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.782595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.787087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.787415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.787436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.791723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.792045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.792065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.796350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.796688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.796708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.801024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.801359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.801380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.253 [2024-07-15 11:39:06.805688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abb810) with pdu=0x2000190fef90 00:28:23.253 [2024-07-15 11:39:06.806007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.253 [2024-07-15 11:39:06.806033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.253 00:28:23.253 Latency(us) 00:28:23.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.253 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:23.253 nvme0n1 : 2.00 5116.88 639.61 0.00 0.00 3122.91 2208.28 9346.00 00:28:23.253 =================================================================================================================== 00:28:23.253 Total : 5116.88 639.61 0.00 0.00 3122.91 2208.28 9346.00 00:28:23.253 0 00:28:23.253 11:39:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:23.253 11:39:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:23.253 11:39:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 
00:28:23.253 | .driver_specific 00:28:23.253 | .nvme_error 00:28:23.253 | .status_code 00:28:23.253 | .command_transient_transport_error' 00:28:23.253 11:39:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:23.513 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 330 > 0 )) 00:28:23.513 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 754482 00:28:23.513 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 754482 ']' 00:28:23.513 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 754482 00:28:23.513 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:23.513 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:23.513 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 754482 00:28:23.513 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:23.513 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:23.513 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 754482' 00:28:23.513 killing process with pid 754482 00:28:23.513 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 754482 00:28:23.513 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.513 00:28:23.513 Latency(us) 00:28:23.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.513 =================================================================================================================== 00:28:23.513 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.513 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 754482 00:28:23.812 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 752236 00:28:23.813 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 752236 ']' 00:28:23.813 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 752236 00:28:23.813 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:23.813 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:23.813 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 752236 00:28:23.813 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:23.813 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:23.813 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 752236' 00:28:23.813 killing process with pid 752236 00:28:23.813 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 752236 00:28:23.813 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 752236 00:28:24.079 00:28:24.079 real 
0m16.928s 00:28:24.079 user 0m32.555s 00:28:24.079 sys 0m4.483s 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.079 ************************************ 00:28:24.079 END TEST nvmf_digest_error 00:28:24.079 ************************************ 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:24.079 rmmod nvme_tcp 00:28:24.079 rmmod nvme_fabrics 00:28:24.079 rmmod nvme_keyring 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 752236 ']' 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 752236 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 752236 ']' 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 752236 00:28:24.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (752236) - No such process 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 752236 is not found' 00:28:24.079 Process with pid 752236 is not found 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:24.079 11:39:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.616 11:39:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:26.616 00:28:26.616 real 0m42.086s 00:28:26.616 user 1m6.717s 00:28:26.616 sys 0m13.554s 00:28:26.616 11:39:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:26.616 11:39:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.616 ************************************ 00:28:26.616 END TEST nvmf_digest 00:28:26.616 ************************************ 00:28:26.616 11:39:09 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:26.616 11:39:09 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:28:26.616 11:39:09 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:28:26.616 11:39:09 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:28:26.616 11:39:09 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:26.616 11:39:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:26.616 11:39:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.616 11:39:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:26.616 ************************************ 00:28:26.616 START TEST nvmf_bdevperf 00:28:26.616 ************************************ 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:26.616 * Looking for test storage... 00:28:26.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:26.616 11:39:09 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:26.617 11:39:09 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:26.617 11:39:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:31.889 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:31.889 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:31.889 Found net devices under 0000:86:00.0: cvl_0_0 00:28:31.889 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for 
pci in "${pci_devs[@]}" 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:31.890 Found net devices under 0000:86:00.1: cvl_0_1 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.890 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.148 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.148 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:32.149 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:28:32.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:28:32.149 00:28:32.149 --- 10.0.0.2 ping statistics --- 00:28:32.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.149 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:32.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:28:32.149 00:28:32.149 --- 10.0.0.1 ping statistics --- 00:28:32.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.149 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=759077 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 759077 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 759077 ']' 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:32.149 11:39:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:32.149 [2024-07-15 11:39:15.594064] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
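The nvmf_tcp_init trace above reduces to a short bring-up: flush the two E810 ports, move the target-side port into its own network namespace, address both ends, open TCP port 4420 on the initiator interface, confirm reachability with the two pings, then start nvmf_tgt inside the namespace. A condensed sketch of those steps, using the names and addresses from this run (cvl_0_0 = target port, cvl_0_1 = initiator port; run as root from an SPDK checkout, and expect different interface names on other hosts):

# target port lives in its own netns so one host can act as both initiator and target
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # reachability checks, as in the log
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &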
00:28:32.149 [2024-07-15 11:39:15.594107] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.149 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.149 [2024-07-15 11:39:15.663543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:32.407 [2024-07-15 11:39:15.743908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.407 [2024-07-15 11:39:15.743944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.407 [2024-07-15 11:39:15.743951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.407 [2024-07-15 11:39:15.743958] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.407 [2024-07-15 11:39:15.743963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.407 [2024-07-15 11:39:15.744073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:32.407 [2024-07-15 11:39:15.744179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.407 [2024-07-15 11:39:15.744180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:32.975 [2024-07-15 11:39:16.448849] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:32.975 Malloc0 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:32.975 [2024-07-15 11:39:16.511897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:32.975 { 00:28:32.975 "params": { 00:28:32.975 "name": "Nvme$subsystem", 00:28:32.975 "trtype": "$TEST_TRANSPORT", 00:28:32.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.975 "adrfam": "ipv4", 00:28:32.975 "trsvcid": "$NVMF_PORT", 00:28:32.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.975 "hdgst": ${hdgst:-false}, 00:28:32.975 "ddgst": ${ddgst:-false} 00:28:32.975 }, 00:28:32.975 "method": "bdev_nvme_attach_controller" 00:28:32.975 } 00:28:32.975 EOF 00:28:32.975 )") 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:32.975 11:39:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:32.975 "params": { 00:28:32.975 "name": "Nvme1", 00:28:32.975 "trtype": "tcp", 00:28:32.975 "traddr": "10.0.0.2", 00:28:32.975 "adrfam": "ipv4", 00:28:32.975 "trsvcid": "4420", 00:28:32.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:32.975 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:32.975 "hdgst": false, 00:28:32.975 "ddgst": false 00:28:32.975 }, 00:28:32.975 "method": "bdev_nvme_attach_controller" 00:28:32.975 }' 00:28:32.975 [2024-07-15 11:39:16.560187] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
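The rpc_cmd calls traced above provision the freshly started target: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace backed by that bdev, and a listener on 10.0.0.2:4420. The RPC socket is a Unix-domain socket at /var/tmp/spdk.sock, so no netns handling is needed on the client side; run by hand from the SPDK tree, the same sequence looks roughly like this (transport options copied verbatim from the trace):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420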
00:28:32.975 [2024-07-15 11:39:16.560237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759133 ] 00:28:33.234 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.234 [2024-07-15 11:39:16.627757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.234 [2024-07-15 11:39:16.701598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.493 Running I/O for 1 seconds... 00:28:34.429 00:28:34.429 Latency(us) 00:28:34.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.429 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:34.429 Verification LBA range: start 0x0 length 0x4000 00:28:34.429 Nvme1n1 : 1.00 10986.98 42.92 0.00 0.00 11608.29 2037.31 13962.02 00:28:34.429 =================================================================================================================== 00:28:34.429 Total : 10986.98 42.92 0.00 0.00 11608.29 2037.31 13962.02 00:28:34.689 11:39:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=759380 00:28:34.689 11:39:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:34.689 11:39:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:34.689 11:39:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:34.689 11:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:34.689 11:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:34.689 11:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:34.689 11:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:34.689 { 00:28:34.689 "params": { 00:28:34.689 "name": "Nvme$subsystem", 00:28:34.689 "trtype": "$TEST_TRANSPORT", 00:28:34.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.689 "adrfam": "ipv4", 00:28:34.689 "trsvcid": "$NVMF_PORT", 00:28:34.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.689 "hdgst": ${hdgst:-false}, 00:28:34.689 "ddgst": ${ddgst:-false} 00:28:34.689 }, 00:28:34.689 "method": "bdev_nvme_attach_controller" 00:28:34.689 } 00:28:34.689 EOF 00:28:34.689 )") 00:28:34.689 11:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:34.689 11:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:34.689 11:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:34.689 11:39:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:34.689 "params": { 00:28:34.689 "name": "Nvme1", 00:28:34.689 "trtype": "tcp", 00:28:34.689 "traddr": "10.0.0.2", 00:28:34.689 "adrfam": "ipv4", 00:28:34.689 "trsvcid": "4420", 00:28:34.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:34.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:34.689 "hdgst": false, 00:28:34.689 "ddgst": false 00:28:34.689 }, 00:28:34.689 "method": "bdev_nvme_attach_controller" 00:28:34.689 }' 00:28:34.689 [2024-07-15 11:39:18.138169] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:28:34.689 [2024-07-15 11:39:18.138215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759380 ] 00:28:34.689 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.689 [2024-07-15 11:39:18.205900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.689 [2024-07-15 11:39:18.275962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.948 Running I/O for 15 seconds... 00:28:38.237 11:39:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 759077 00:28:38.237 11:39:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:38.237 [2024-07-15 11:39:21.109854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.237 [2024-07-15 11:39:21.109896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.109916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.109925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.109935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.109943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.109951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.109958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.109967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.109980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.109989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.109997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.110006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.110013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.110023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.110029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.110037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.110044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.110052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.110059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.110069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.110076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.110085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.110093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.110102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.110109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.237 [2024-07-15 11:39:21.110118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.237 [2024-07-15 11:39:21.110128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:38.238 [2024-07-15 11:39:21.110214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110489] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:109 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.238 [2024-07-15 11:39:21.110897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.238 [2024-07-15 11:39:21.110905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.110913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.110920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.110928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.110935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.110943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103632 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.110950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.110958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.110965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.110973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.110979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.110987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.110995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:38.239 [2024-07-15 11:39:21.111097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111253] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.239 [2024-07-15 11:39:21.111404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.239 [2024-07-15 11:39:21.111419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.239 [2024-07-15 11:39:21.111535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.239 [2024-07-15 11:39:21.111542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 
[2024-07-15 11:39:21.111873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.240 [2024-07-15 11:39:21.111981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.240 [2024-07-15 11:39:21.111988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef5c70 is same with the state(5) to be set 00:28:38.240 [2024-07-15 11:39:21.111996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.240 [2024-07-15 11:39:21.112001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.240 [2024-07-15 11:39:21.112008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104168 len:8 PRP1 0x0 PRP2 0x0 00:28:38.240 [2024-07-15 11:39:21.112016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:38.240 [2024-07-15 11:39:21.112057] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ef5c70 was disconnected and freed. reset controller. 00:28:38.240 [2024-07-15 11:39:21.114879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.240 [2024-07-15 11:39:21.114934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.240 [2024-07-15 11:39:21.115424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.240 [2024-07-15 11:39:21.115441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.240 [2024-07-15 11:39:21.115448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.240 [2024-07-15 11:39:21.115627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.240 [2024-07-15 11:39:21.115805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.240 [2024-07-15 11:39:21.115813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.240 [2024-07-15 11:39:21.115821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.240 [2024-07-15 11:39:21.118668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.240 [2024-07-15 11:39:21.128199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.240 [2024-07-15 11:39:21.128502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.240 [2024-07-15 11:39:21.128519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.240 [2024-07-15 11:39:21.128527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.240 [2024-07-15 11:39:21.128700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.240 [2024-07-15 11:39:21.128874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.240 [2024-07-15 11:39:21.128883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.240 [2024-07-15 11:39:21.128890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.240 [2024-07-15 11:39:21.131714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.240 [2024-07-15 11:39:21.141175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.240 [2024-07-15 11:39:21.141607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.240 [2024-07-15 11:39:21.141625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.240 [2024-07-15 11:39:21.141632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.240 [2024-07-15 11:39:21.141805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.240 [2024-07-15 11:39:21.141980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.240 [2024-07-15 11:39:21.141990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.241 [2024-07-15 11:39:21.141996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.241 [2024-07-15 11:39:21.144653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.241 [2024-07-15 11:39:21.154131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.241 [2024-07-15 11:39:21.154498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.241 [2024-07-15 11:39:21.154515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.241 [2024-07-15 11:39:21.154522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.241 [2024-07-15 11:39:21.154689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.241 [2024-07-15 11:39:21.154855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.241 [2024-07-15 11:39:21.154864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.241 [2024-07-15 11:39:21.154870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.241 [2024-07-15 11:39:21.157580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.241 [2024-07-15 11:39:21.167099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.241 [2024-07-15 11:39:21.167519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.241 [2024-07-15 11:39:21.167537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.241 [2024-07-15 11:39:21.167544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.241 [2024-07-15 11:39:21.167707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.241 [2024-07-15 11:39:21.167871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.241 [2024-07-15 11:39:21.167880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.241 [2024-07-15 11:39:21.167886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.241 [2024-07-15 11:39:21.170588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.241 [2024-07-15 11:39:21.180200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.241 [2024-07-15 11:39:21.180533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.241 [2024-07-15 11:39:21.180550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.241 [2024-07-15 11:39:21.180558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.241 [2024-07-15 11:39:21.180720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.241 [2024-07-15 11:39:21.180884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.241 [2024-07-15 11:39:21.180893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.241 [2024-07-15 11:39:21.180900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.241 [2024-07-15 11:39:21.183542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.241 [2024-07-15 11:39:21.193212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.241 [2024-07-15 11:39:21.193547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.241 [2024-07-15 11:39:21.193592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.241 [2024-07-15 11:39:21.193615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.241 [2024-07-15 11:39:21.194160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.241 [2024-07-15 11:39:21.194341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.241 [2024-07-15 11:39:21.194352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.241 [2024-07-15 11:39:21.194363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.241 [2024-07-15 11:39:21.197070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.241 [2024-07-15 11:39:21.206110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.241 [2024-07-15 11:39:21.206504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.241 [2024-07-15 11:39:21.206522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.241 [2024-07-15 11:39:21.206529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.241 [2024-07-15 11:39:21.206701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.241 [2024-07-15 11:39:21.206875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.241 [2024-07-15 11:39:21.206884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.241 [2024-07-15 11:39:21.206891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.241 [2024-07-15 11:39:21.209597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.241 [2024-07-15 11:39:21.218997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.241 [2024-07-15 11:39:21.219438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.241 [2024-07-15 11:39:21.219483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.241 [2024-07-15 11:39:21.219505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.241 [2024-07-15 11:39:21.220083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.241 [2024-07-15 11:39:21.220403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.241 [2024-07-15 11:39:21.220413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.241 [2024-07-15 11:39:21.220421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.241 [2024-07-15 11:39:21.223132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.241 [2024-07-15 11:39:21.231997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.241 [2024-07-15 11:39:21.232315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.241 [2024-07-15 11:39:21.232332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.241 [2024-07-15 11:39:21.232340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.241 [2024-07-15 11:39:21.232519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.241 [2024-07-15 11:39:21.232684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.241 [2024-07-15 11:39:21.232693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.241 [2024-07-15 11:39:21.232699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.241 [2024-07-15 11:39:21.235331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.241 [2024-07-15 11:39:21.244970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.241 [2024-07-15 11:39:21.245349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.241 [2024-07-15 11:39:21.245366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.241 [2024-07-15 11:39:21.245373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.241 [2024-07-15 11:39:21.245549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.241 [2024-07-15 11:39:21.245714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.241 [2024-07-15 11:39:21.245723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.241 [2024-07-15 11:39:21.245729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.241 [2024-07-15 11:39:21.248475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.241 [2024-07-15 11:39:21.257963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.241 [2024-07-15 11:39:21.258378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.241 [2024-07-15 11:39:21.258396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.241 [2024-07-15 11:39:21.258404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.241 [2024-07-15 11:39:21.258576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.241 [2024-07-15 11:39:21.258748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.242 [2024-07-15 11:39:21.258757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.242 [2024-07-15 11:39:21.258764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.242 [2024-07-15 11:39:21.261422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.242 [2024-07-15 11:39:21.270917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.242 [2024-07-15 11:39:21.271205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.242 [2024-07-15 11:39:21.271223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.242 [2024-07-15 11:39:21.271234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.242 [2024-07-15 11:39:21.271396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.242 [2024-07-15 11:39:21.271559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.242 [2024-07-15 11:39:21.271569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.242 [2024-07-15 11:39:21.271575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.242 [2024-07-15 11:39:21.274316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.242 [2024-07-15 11:39:21.283899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.242 [2024-07-15 11:39:21.284188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.242 [2024-07-15 11:39:21.284205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.242 [2024-07-15 11:39:21.284212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.242 [2024-07-15 11:39:21.284379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.242 [2024-07-15 11:39:21.284546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.242 [2024-07-15 11:39:21.284556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.242 [2024-07-15 11:39:21.284562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.242 [2024-07-15 11:39:21.287281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.242 [2024-07-15 11:39:21.296927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.242 [2024-07-15 11:39:21.297257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.242 [2024-07-15 11:39:21.297275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.242 [2024-07-15 11:39:21.297282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.242 [2024-07-15 11:39:21.297462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.242 [2024-07-15 11:39:21.297625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.242 [2024-07-15 11:39:21.297634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.242 [2024-07-15 11:39:21.297640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.242 [2024-07-15 11:39:21.300317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.242 [2024-07-15 11:39:21.309941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.242 [2024-07-15 11:39:21.310284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.242 [2024-07-15 11:39:21.310301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.242 [2024-07-15 11:39:21.310308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.242 [2024-07-15 11:39:21.310471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.242 [2024-07-15 11:39:21.310635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.242 [2024-07-15 11:39:21.310644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.242 [2024-07-15 11:39:21.310649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.242 [2024-07-15 11:39:21.313386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.242 [2024-07-15 11:39:21.322807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.242 [2024-07-15 11:39:21.323097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.242 [2024-07-15 11:39:21.323113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.242 [2024-07-15 11:39:21.323120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.242 [2024-07-15 11:39:21.323305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.242 [2024-07-15 11:39:21.323478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.242 [2024-07-15 11:39:21.323488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.242 [2024-07-15 11:39:21.323494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.242 [2024-07-15 11:39:21.326159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.242 [2024-07-15 11:39:21.335756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.242 [2024-07-15 11:39:21.336174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.242 [2024-07-15 11:39:21.336192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.242 [2024-07-15 11:39:21.336199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.242 [2024-07-15 11:39:21.336378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.242 [2024-07-15 11:39:21.336559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.242 [2024-07-15 11:39:21.336569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.242 [2024-07-15 11:39:21.336575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.242 [2024-07-15 11:39:21.339170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.242 [2024-07-15 11:39:21.348671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.242 [2024-07-15 11:39:21.349133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.242 [2024-07-15 11:39:21.349175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.242 [2024-07-15 11:39:21.349197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.242 [2024-07-15 11:39:21.349738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.242 [2024-07-15 11:39:21.349913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.242 [2024-07-15 11:39:21.349923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.242 [2024-07-15 11:39:21.349929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.242 [2024-07-15 11:39:21.352572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.242 [2024-07-15 11:39:21.361498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.242 [2024-07-15 11:39:21.361937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.242 [2024-07-15 11:39:21.361954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.242 [2024-07-15 11:39:21.361961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.242 [2024-07-15 11:39:21.362125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.242 [2024-07-15 11:39:21.362312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.242 [2024-07-15 11:39:21.362322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.242 [2024-07-15 11:39:21.362329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.242 [2024-07-15 11:39:21.365144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.242 [2024-07-15 11:39:21.374702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.242 [2024-07-15 11:39:21.375064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.242 [2024-07-15 11:39:21.375106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.242 [2024-07-15 11:39:21.375134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.242 [2024-07-15 11:39:21.375726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.242 [2024-07-15 11:39:21.375946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.242 [2024-07-15 11:39:21.375956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.242 [2024-07-15 11:39:21.375963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.242 [2024-07-15 11:39:21.378818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.242 [2024-07-15 11:39:21.387673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.242 [2024-07-15 11:39:21.388092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.242 [2024-07-15 11:39:21.388109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.242 [2024-07-15 11:39:21.388116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.242 [2024-07-15 11:39:21.388284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.242 [2024-07-15 11:39:21.388448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.242 [2024-07-15 11:39:21.388457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.242 [2024-07-15 11:39:21.388463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.242 [2024-07-15 11:39:21.391124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.242 [2024-07-15 11:39:21.400478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.242 [2024-07-15 11:39:21.400910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.242 [2024-07-15 11:39:21.400952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.242 [2024-07-15 11:39:21.400974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.243 [2024-07-15 11:39:21.401570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.243 [2024-07-15 11:39:21.402097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.243 [2024-07-15 11:39:21.402106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.243 [2024-07-15 11:39:21.402113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.243 [2024-07-15 11:39:21.404743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.243 [2024-07-15 11:39:21.413418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.243 [2024-07-15 11:39:21.413878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.243 [2024-07-15 11:39:21.413924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.243 [2024-07-15 11:39:21.413948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.243 [2024-07-15 11:39:21.414544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.243 [2024-07-15 11:39:21.414806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.243 [2024-07-15 11:39:21.414818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.243 [2024-07-15 11:39:21.414824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.243 [2024-07-15 11:39:21.417469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.243 [2024-07-15 11:39:21.426358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.243 [2024-07-15 11:39:21.426771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.243 [2024-07-15 11:39:21.426788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.243 [2024-07-15 11:39:21.426795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.243 [2024-07-15 11:39:21.426960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.243 [2024-07-15 11:39:21.427123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.243 [2024-07-15 11:39:21.427133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.243 [2024-07-15 11:39:21.427139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.243 [2024-07-15 11:39:21.429835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.243 [2024-07-15 11:39:21.439178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.243 [2024-07-15 11:39:21.439603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.243 [2024-07-15 11:39:21.439647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.243 [2024-07-15 11:39:21.439669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.243 [2024-07-15 11:39:21.440159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.243 [2024-07-15 11:39:21.440347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.243 [2024-07-15 11:39:21.440357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.243 [2024-07-15 11:39:21.440363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.243 [2024-07-15 11:39:21.443031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.243 [2024-07-15 11:39:21.452020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.243 [2024-07-15 11:39:21.452429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.243 [2024-07-15 11:39:21.452446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.243 [2024-07-15 11:39:21.452453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.243 [2024-07-15 11:39:21.452617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.243 [2024-07-15 11:39:21.452780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.243 [2024-07-15 11:39:21.452789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.243 [2024-07-15 11:39:21.452795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.243 [2024-07-15 11:39:21.455489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.243 [2024-07-15 11:39:21.464839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.243 [2024-07-15 11:39:21.465266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.243 [2024-07-15 11:39:21.465282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.243 [2024-07-15 11:39:21.465290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.243 [2024-07-15 11:39:21.465453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.243 [2024-07-15 11:39:21.465616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.243 [2024-07-15 11:39:21.465625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.243 [2024-07-15 11:39:21.465631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.243 [2024-07-15 11:39:21.468333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.243 [2024-07-15 11:39:21.477712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.243 [2024-07-15 11:39:21.478157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.243 [2024-07-15 11:39:21.478198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.243 [2024-07-15 11:39:21.478220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.243 [2024-07-15 11:39:21.478817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.243 [2024-07-15 11:39:21.479036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.243 [2024-07-15 11:39:21.479046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.243 [2024-07-15 11:39:21.479052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.243 [2024-07-15 11:39:21.481768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.243 [2024-07-15 11:39:21.490596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.243 [2024-07-15 11:39:21.491044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.243 [2024-07-15 11:39:21.491086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.243 [2024-07-15 11:39:21.491108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.243 [2024-07-15 11:39:21.491702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.243 [2024-07-15 11:39:21.492181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.243 [2024-07-15 11:39:21.492191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.243 [2024-07-15 11:39:21.492196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.243 [2024-07-15 11:39:21.494823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.243 [2024-07-15 11:39:21.503515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.243 [2024-07-15 11:39:21.503933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.243 [2024-07-15 11:39:21.503975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.243 [2024-07-15 11:39:21.504004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.243 [2024-07-15 11:39:21.504468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.243 [2024-07-15 11:39:21.504643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.243 [2024-07-15 11:39:21.504652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.243 [2024-07-15 11:39:21.504659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.243 [2024-07-15 11:39:21.507352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.243 [2024-07-15 11:39:21.516371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.243 [2024-07-15 11:39:21.516818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.243 [2024-07-15 11:39:21.516860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.243 [2024-07-15 11:39:21.516882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.243 [2024-07-15 11:39:21.517300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.243 [2024-07-15 11:39:21.517476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.243 [2024-07-15 11:39:21.517486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.243 [2024-07-15 11:39:21.517493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.243 [2024-07-15 11:39:21.520155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.243 [2024-07-15 11:39:21.529387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.243 [2024-07-15 11:39:21.529730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.243 [2024-07-15 11:39:21.529747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.243 [2024-07-15 11:39:21.529754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.243 [2024-07-15 11:39:21.529917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.243 [2024-07-15 11:39:21.530080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.243 [2024-07-15 11:39:21.530089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.243 [2024-07-15 11:39:21.530095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.243 [2024-07-15 11:39:21.532792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.243 [2024-07-15 11:39:21.542235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.243 [2024-07-15 11:39:21.542599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.243 [2024-07-15 11:39:21.542642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.244 [2024-07-15 11:39:21.542663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.244 [2024-07-15 11:39:21.543145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.244 [2024-07-15 11:39:21.543340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.244 [2024-07-15 11:39:21.543354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.244 [2024-07-15 11:39:21.543360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.244 [2024-07-15 11:39:21.546030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.244 [2024-07-15 11:39:21.555174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.244 [2024-07-15 11:39:21.555531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.244 [2024-07-15 11:39:21.555549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.244 [2024-07-15 11:39:21.555556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.244 [2024-07-15 11:39:21.555719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.244 [2024-07-15 11:39:21.555882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.244 [2024-07-15 11:39:21.555892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.244 [2024-07-15 11:39:21.555898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.244 [2024-07-15 11:39:21.558598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.244 [2024-07-15 11:39:21.568202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.244 [2024-07-15 11:39:21.568512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.244 [2024-07-15 11:39:21.568529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.244 [2024-07-15 11:39:21.568535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.244 [2024-07-15 11:39:21.568698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.244 [2024-07-15 11:39:21.568861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.244 [2024-07-15 11:39:21.568871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.244 [2024-07-15 11:39:21.568877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.244 [2024-07-15 11:39:21.571567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.244 [2024-07-15 11:39:21.581219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.244 [2024-07-15 11:39:21.581639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.244 [2024-07-15 11:39:21.581680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.244 [2024-07-15 11:39:21.581702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.244 [2024-07-15 11:39:21.582101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.244 [2024-07-15 11:39:21.582279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.244 [2024-07-15 11:39:21.582288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.244 [2024-07-15 11:39:21.582295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.244 [2024-07-15 11:39:21.584915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.244 [2024-07-15 11:39:21.594185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.244 [2024-07-15 11:39:21.594556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.244 [2024-07-15 11:39:21.594572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.244 [2024-07-15 11:39:21.594579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.244 [2024-07-15 11:39:21.594742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.244 [2024-07-15 11:39:21.594905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.244 [2024-07-15 11:39:21.594915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.244 [2024-07-15 11:39:21.594920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.244 [2024-07-15 11:39:21.597623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.244 [2024-07-15 11:39:21.607070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.244 [2024-07-15 11:39:21.607525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.244 [2024-07-15 11:39:21.607568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.244 [2024-07-15 11:39:21.607590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.244 [2024-07-15 11:39:21.608169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.244 [2024-07-15 11:39:21.608628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.244 [2024-07-15 11:39:21.608639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.244 [2024-07-15 11:39:21.608645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.244 [2024-07-15 11:39:21.611298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.244 [2024-07-15 11:39:21.619919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.244 [2024-07-15 11:39:21.620287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.244 [2024-07-15 11:39:21.620303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.244 [2024-07-15 11:39:21.620311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.244 [2024-07-15 11:39:21.620474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.244 [2024-07-15 11:39:21.620637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.244 [2024-07-15 11:39:21.620646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.244 [2024-07-15 11:39:21.620652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.244 [2024-07-15 11:39:21.623447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.244 [2024-07-15 11:39:21.632812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.244 [2024-07-15 11:39:21.633259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.244 [2024-07-15 11:39:21.633276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.244 [2024-07-15 11:39:21.633283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.244 [2024-07-15 11:39:21.633462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.244 [2024-07-15 11:39:21.633640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.244 [2024-07-15 11:39:21.633650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.244 [2024-07-15 11:39:21.633656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.244 [2024-07-15 11:39:21.636273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.244 [2024-07-15 11:39:21.645708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.244 [2024-07-15 11:39:21.646149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.244 [2024-07-15 11:39:21.646165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.244 [2024-07-15 11:39:21.646172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.244 [2024-07-15 11:39:21.646359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.244 [2024-07-15 11:39:21.646532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.244 [2024-07-15 11:39:21.646541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.244 [2024-07-15 11:39:21.646548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.244 [2024-07-15 11:39:21.649207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.244 [2024-07-15 11:39:21.658639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.244 [2024-07-15 11:39:21.659051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.244 [2024-07-15 11:39:21.659067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.244 [2024-07-15 11:39:21.659074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.244 [2024-07-15 11:39:21.659243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.244 [2024-07-15 11:39:21.659431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.244 [2024-07-15 11:39:21.659440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.244 [2024-07-15 11:39:21.659446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.244 [2024-07-15 11:39:21.662122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.244 [2024-07-15 11:39:21.671556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.244 [2024-07-15 11:39:21.671989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.244 [2024-07-15 11:39:21.672005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.244 [2024-07-15 11:39:21.672012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.244 [2024-07-15 11:39:21.672174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.244 [2024-07-15 11:39:21.672363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.244 [2024-07-15 11:39:21.672374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.244 [2024-07-15 11:39:21.672383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.244 [2024-07-15 11:39:21.675141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.244 [2024-07-15 11:39:21.684361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.245 [2024-07-15 11:39:21.684774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.245 [2024-07-15 11:39:21.684791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.245 [2024-07-15 11:39:21.684798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.245 [2024-07-15 11:39:21.684961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.245 [2024-07-15 11:39:21.685124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.245 [2024-07-15 11:39:21.685133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.245 [2024-07-15 11:39:21.685140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.245 [2024-07-15 11:39:21.687830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.245 [2024-07-15 11:39:21.697259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.245 [2024-07-15 11:39:21.697756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.245 [2024-07-15 11:39:21.697798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.245 [2024-07-15 11:39:21.697821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.245 [2024-07-15 11:39:21.698412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.245 [2024-07-15 11:39:21.698942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.245 [2024-07-15 11:39:21.698952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.245 [2024-07-15 11:39:21.698958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.245 [2024-07-15 11:39:21.701599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.245 [2024-07-15 11:39:21.710122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.245 [2024-07-15 11:39:21.710481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.245 [2024-07-15 11:39:21.710497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.245 [2024-07-15 11:39:21.710504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.245 [2024-07-15 11:39:21.710668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.245 [2024-07-15 11:39:21.710831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.245 [2024-07-15 11:39:21.710841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.245 [2024-07-15 11:39:21.710847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.245 [2024-07-15 11:39:21.713535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.245 [2024-07-15 11:39:21.723027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.245 [2024-07-15 11:39:21.723355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.245 [2024-07-15 11:39:21.723374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.245 [2024-07-15 11:39:21.723381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.245 [2024-07-15 11:39:21.723544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.245 [2024-07-15 11:39:21.723708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.245 [2024-07-15 11:39:21.723717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.245 [2024-07-15 11:39:21.723723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.245 [2024-07-15 11:39:21.726420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.245 [2024-07-15 11:39:21.735893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.245 [2024-07-15 11:39:21.736332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.245 [2024-07-15 11:39:21.736376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.245 [2024-07-15 11:39:21.736398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.245 [2024-07-15 11:39:21.736623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.245 [2024-07-15 11:39:21.736787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.245 [2024-07-15 11:39:21.736797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.245 [2024-07-15 11:39:21.736803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.245 [2024-07-15 11:39:21.739495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.245 [2024-07-15 11:39:21.748686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.245 [2024-07-15 11:39:21.749138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.245 [2024-07-15 11:39:21.749180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.245 [2024-07-15 11:39:21.749202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.245 [2024-07-15 11:39:21.749743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.245 [2024-07-15 11:39:21.749917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.245 [2024-07-15 11:39:21.749927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.245 [2024-07-15 11:39:21.749933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.245 [2024-07-15 11:39:21.752569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.245 [2024-07-15 11:39:21.761633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.245 [2024-07-15 11:39:21.762047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.245 [2024-07-15 11:39:21.762063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.245 [2024-07-15 11:39:21.762070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.245 [2024-07-15 11:39:21.762238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.245 [2024-07-15 11:39:21.762431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.245 [2024-07-15 11:39:21.762441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.245 [2024-07-15 11:39:21.762447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.245 [2024-07-15 11:39:21.765109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.245 [2024-07-15 11:39:21.774549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.245 [2024-07-15 11:39:21.774906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.245 [2024-07-15 11:39:21.774961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.245 [2024-07-15 11:39:21.774983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.245 [2024-07-15 11:39:21.775577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.245 [2024-07-15 11:39:21.775820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.245 [2024-07-15 11:39:21.775829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.245 [2024-07-15 11:39:21.775835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.245 [2024-07-15 11:39:21.778573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.245 [2024-07-15 11:39:21.787455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.245 [2024-07-15 11:39:21.787821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.245 [2024-07-15 11:39:21.787837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.245 [2024-07-15 11:39:21.787844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.245 [2024-07-15 11:39:21.788008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.245 [2024-07-15 11:39:21.788173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.245 [2024-07-15 11:39:21.788183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.245 [2024-07-15 11:39:21.788189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.245 [2024-07-15 11:39:21.790881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.245 [2024-07-15 11:39:21.800337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.245 [2024-07-15 11:39:21.800759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.245 [2024-07-15 11:39:21.800776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.245 [2024-07-15 11:39:21.800783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.245 [2024-07-15 11:39:21.800956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.245 [2024-07-15 11:39:21.801131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.245 [2024-07-15 11:39:21.801140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.245 [2024-07-15 11:39:21.801147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.245 [2024-07-15 11:39:21.803798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.245 [2024-07-15 11:39:21.813290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.245 [2024-07-15 11:39:21.813720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.245 [2024-07-15 11:39:21.813737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.245 [2024-07-15 11:39:21.813744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.245 [2024-07-15 11:39:21.813908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.245 [2024-07-15 11:39:21.814071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.245 [2024-07-15 11:39:21.814080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.245 [2024-07-15 11:39:21.814087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.245 [2024-07-15 11:39:21.816736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.532 [2024-07-15 11:39:21.826346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.532 [2024-07-15 11:39:21.826784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.532 [2024-07-15 11:39:21.826801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.532 [2024-07-15 11:39:21.826808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.532 [2024-07-15 11:39:21.826981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.532 [2024-07-15 11:39:21.827154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.532 [2024-07-15 11:39:21.827164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.532 [2024-07-15 11:39:21.827171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.532 [2024-07-15 11:39:21.829922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.532 [2024-07-15 11:39:21.839392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.532 [2024-07-15 11:39:21.839758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.532 [2024-07-15 11:39:21.839775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.532 [2024-07-15 11:39:21.839782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.532 [2024-07-15 11:39:21.839955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.532 [2024-07-15 11:39:21.840129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.532 [2024-07-15 11:39:21.840139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.532 [2024-07-15 11:39:21.840145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.532 [2024-07-15 11:39:21.842845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.532 [2024-07-15 11:39:21.852334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.532 [2024-07-15 11:39:21.852792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.532 [2024-07-15 11:39:21.852809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.532 [2024-07-15 11:39:21.852818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.532 [2024-07-15 11:39:21.852982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.532 [2024-07-15 11:39:21.853146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.532 [2024-07-15 11:39:21.853155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.532 [2024-07-15 11:39:21.853160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.532 [2024-07-15 11:39:21.855809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.532 [2024-07-15 11:39:21.865318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.532 [2024-07-15 11:39:21.865768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.532 [2024-07-15 11:39:21.865809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.532 [2024-07-15 11:39:21.865831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.532 [2024-07-15 11:39:21.866426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.532 [2024-07-15 11:39:21.867011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.532 [2024-07-15 11:39:21.867036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.532 [2024-07-15 11:39:21.867057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.532 [2024-07-15 11:39:21.869727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.532 [2024-07-15 11:39:21.878194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.532 [2024-07-15 11:39:21.878592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.532 [2024-07-15 11:39:21.878608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.532 [2024-07-15 11:39:21.878615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.532 [2024-07-15 11:39:21.878778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.532 [2024-07-15 11:39:21.878941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.532 [2024-07-15 11:39:21.878950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.532 [2024-07-15 11:39:21.878956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.532 [2024-07-15 11:39:21.881813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.532 [2024-07-15 11:39:21.891147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.532 [2024-07-15 11:39:21.891604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.532 [2024-07-15 11:39:21.891621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.532 [2024-07-15 11:39:21.891628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.532 [2024-07-15 11:39:21.891801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.532 [2024-07-15 11:39:21.891975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.532 [2024-07-15 11:39:21.891987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.532 [2024-07-15 11:39:21.891993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.532 [2024-07-15 11:39:21.894634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.533 [2024-07-15 11:39:21.904046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.533 [2024-07-15 11:39:21.904516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.533 [2024-07-15 11:39:21.904558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.533 [2024-07-15 11:39:21.904581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.533 [2024-07-15 11:39:21.905100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.533 [2024-07-15 11:39:21.905286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.533 [2024-07-15 11:39:21.905297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.533 [2024-07-15 11:39:21.905303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.533 [2024-07-15 11:39:21.907973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.533 [2024-07-15 11:39:21.916866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.533 [2024-07-15 11:39:21.917299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.533 [2024-07-15 11:39:21.917315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.533 [2024-07-15 11:39:21.917322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.533 [2024-07-15 11:39:21.917485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.533 [2024-07-15 11:39:21.917648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.533 [2024-07-15 11:39:21.917657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.533 [2024-07-15 11:39:21.917664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.533 [2024-07-15 11:39:21.920263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.533 [2024-07-15 11:39:21.929883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.533 [2024-07-15 11:39:21.930290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.533 [2024-07-15 11:39:21.930332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.533 [2024-07-15 11:39:21.930355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.533 [2024-07-15 11:39:21.930904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.533 [2024-07-15 11:39:21.931068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.533 [2024-07-15 11:39:21.931077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.533 [2024-07-15 11:39:21.931084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.533 [2024-07-15 11:39:21.933779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.533 [2024-07-15 11:39:21.942758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.533 [2024-07-15 11:39:21.943172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.533 [2024-07-15 11:39:21.943189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.533 [2024-07-15 11:39:21.943195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.533 [2024-07-15 11:39:21.943385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.533 [2024-07-15 11:39:21.943558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.533 [2024-07-15 11:39:21.943568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.533 [2024-07-15 11:39:21.943575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.533 [2024-07-15 11:39:21.946239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.533 [2024-07-15 11:39:21.955585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.533 [2024-07-15 11:39:21.956026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.533 [2024-07-15 11:39:21.956070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.533 [2024-07-15 11:39:21.956092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.533 [2024-07-15 11:39:21.956546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.533 [2024-07-15 11:39:21.956720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.533 [2024-07-15 11:39:21.956730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.533 [2024-07-15 11:39:21.956736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.533 [2024-07-15 11:39:21.959494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.533 [2024-07-15 11:39:21.968471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.533 [2024-07-15 11:39:21.968904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.533 [2024-07-15 11:39:21.968920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.533 [2024-07-15 11:39:21.968927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.533 [2024-07-15 11:39:21.969090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.533 [2024-07-15 11:39:21.969259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.533 [2024-07-15 11:39:21.969269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.533 [2024-07-15 11:39:21.969291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.533 [2024-07-15 11:39:21.971968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.533 [2024-07-15 11:39:21.981445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.533 [2024-07-15 11:39:21.981851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.533 [2024-07-15 11:39:21.981867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.533 [2024-07-15 11:39:21.981875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.533 [2024-07-15 11:39:21.982041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.533 [2024-07-15 11:39:21.982205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.533 [2024-07-15 11:39:21.982214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.533 [2024-07-15 11:39:21.982220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.533 [2024-07-15 11:39:21.984918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.533 [2024-07-15 11:39:21.994451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.533 [2024-07-15 11:39:21.994928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.533 [2024-07-15 11:39:21.994970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.533 [2024-07-15 11:39:21.994992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.533 [2024-07-15 11:39:21.995427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.533 [2024-07-15 11:39:21.995593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.533 [2024-07-15 11:39:21.995602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.533 [2024-07-15 11:39:21.995608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.533 [2024-07-15 11:39:21.998378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.533 [2024-07-15 11:39:22.007386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.533 [2024-07-15 11:39:22.007815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.533 [2024-07-15 11:39:22.007832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.533 [2024-07-15 11:39:22.007839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.533 [2024-07-15 11:39:22.008001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.533 [2024-07-15 11:39:22.008165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.533 [2024-07-15 11:39:22.008174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.533 [2024-07-15 11:39:22.008180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.533 [2024-07-15 11:39:22.010888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.533 [2024-07-15 11:39:22.020279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.533 [2024-07-15 11:39:22.020727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.533 [2024-07-15 11:39:22.020770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.533 [2024-07-15 11:39:22.020791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.533 [2024-07-15 11:39:22.021332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.533 [2024-07-15 11:39:22.021507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.533 [2024-07-15 11:39:22.021516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.533 [2024-07-15 11:39:22.021526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.533 [2024-07-15 11:39:22.024184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.533 [2024-07-15 11:39:22.033171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.533 [2024-07-15 11:39:22.033620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.533 [2024-07-15 11:39:22.033662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.533 [2024-07-15 11:39:22.033683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.533 [2024-07-15 11:39:22.034242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.534 [2024-07-15 11:39:22.034434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.534 [2024-07-15 11:39:22.034442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.534 [2024-07-15 11:39:22.034448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.534 [2024-07-15 11:39:22.037112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.534 [2024-07-15 11:39:22.046084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.534 [2024-07-15 11:39:22.046511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.534 [2024-07-15 11:39:22.046553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.534 [2024-07-15 11:39:22.046575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.534 [2024-07-15 11:39:22.047153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.534 [2024-07-15 11:39:22.047695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.534 [2024-07-15 11:39:22.047705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.534 [2024-07-15 11:39:22.047711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.534 [2024-07-15 11:39:22.050353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.534 [2024-07-15 11:39:22.059023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.534 [2024-07-15 11:39:22.059452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.534 [2024-07-15 11:39:22.059468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.534 [2024-07-15 11:39:22.059475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.534 [2024-07-15 11:39:22.059639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.534 [2024-07-15 11:39:22.059801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.534 [2024-07-15 11:39:22.059811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.534 [2024-07-15 11:39:22.059817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.534 [2024-07-15 11:39:22.062515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.534 [2024-07-15 11:39:22.071825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.534 [2024-07-15 11:39:22.072256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.534 [2024-07-15 11:39:22.072271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.534 [2024-07-15 11:39:22.072279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.534 [2024-07-15 11:39:22.072442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.534 [2024-07-15 11:39:22.072605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.534 [2024-07-15 11:39:22.072614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.534 [2024-07-15 11:39:22.072620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.534 [2024-07-15 11:39:22.075359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.534 [2024-07-15 11:39:22.084724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.534 [2024-07-15 11:39:22.085155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.534 [2024-07-15 11:39:22.085171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.534 [2024-07-15 11:39:22.085178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.534 [2024-07-15 11:39:22.085367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.534 [2024-07-15 11:39:22.085542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.534 [2024-07-15 11:39:22.085552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.534 [2024-07-15 11:39:22.085558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.534 [2024-07-15 11:39:22.088217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.534 [2024-07-15 11:39:22.097609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.534 [2024-07-15 11:39:22.098036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.534 [2024-07-15 11:39:22.098052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.534 [2024-07-15 11:39:22.098059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.534 [2024-07-15 11:39:22.098221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.534 [2024-07-15 11:39:22.098415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.534 [2024-07-15 11:39:22.098425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.534 [2024-07-15 11:39:22.098432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.534 [2024-07-15 11:39:22.101093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.534 [2024-07-15 11:39:22.110636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.534 [2024-07-15 11:39:22.111044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.534 [2024-07-15 11:39:22.111059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.534 [2024-07-15 11:39:22.111066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.534 [2024-07-15 11:39:22.111239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.534 [2024-07-15 11:39:22.111427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.534 [2024-07-15 11:39:22.111436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.534 [2024-07-15 11:39:22.111443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.534 [2024-07-15 11:39:22.114108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.795 [2024-07-15 11:39:22.123652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.795 [2024-07-15 11:39:22.124094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.795 [2024-07-15 11:39:22.124110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.795 [2024-07-15 11:39:22.124117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.795 [2024-07-15 11:39:22.124304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.795 [2024-07-15 11:39:22.124478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.795 [2024-07-15 11:39:22.124488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.795 [2024-07-15 11:39:22.124494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.795 [2024-07-15 11:39:22.127221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.795 [2024-07-15 11:39:22.136577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.795 [2024-07-15 11:39:22.137033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.795 [2024-07-15 11:39:22.137078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.795 [2024-07-15 11:39:22.137099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.795 [2024-07-15 11:39:22.137602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.795 [2024-07-15 11:39:22.137776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.795 [2024-07-15 11:39:22.137785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.795 [2024-07-15 11:39:22.137791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.795 [2024-07-15 11:39:22.140635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.795 [2024-07-15 11:39:22.149599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.795 [2024-07-15 11:39:22.149980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.795 [2024-07-15 11:39:22.149997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.795 [2024-07-15 11:39:22.150006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.795 [2024-07-15 11:39:22.150178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.795 [2024-07-15 11:39:22.150358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.795 [2024-07-15 11:39:22.150368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.795 [2024-07-15 11:39:22.150378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.795 [2024-07-15 11:39:22.152994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.795 [2024-07-15 11:39:22.162579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.795 [2024-07-15 11:39:22.163028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.795 [2024-07-15 11:39:22.163070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.795 [2024-07-15 11:39:22.163092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.795 [2024-07-15 11:39:22.163641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.795 [2024-07-15 11:39:22.163815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.795 [2024-07-15 11:39:22.163824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.795 [2024-07-15 11:39:22.163830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.795 [2024-07-15 11:39:22.166475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.795 [2024-07-15 11:39:22.175396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.795 [2024-07-15 11:39:22.175831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.795 [2024-07-15 11:39:22.175873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.795 [2024-07-15 11:39:22.175895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.795 [2024-07-15 11:39:22.176489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.795 [2024-07-15 11:39:22.176941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.795 [2024-07-15 11:39:22.176950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.795 [2024-07-15 11:39:22.176956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.795 [2024-07-15 11:39:22.179552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.795 [2024-07-15 11:39:22.188217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.795 [2024-07-15 11:39:22.188553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.795 [2024-07-15 11:39:22.188570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.795 [2024-07-15 11:39:22.188577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.795 [2024-07-15 11:39:22.188739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.795 [2024-07-15 11:39:22.188902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.795 [2024-07-15 11:39:22.188911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.795 [2024-07-15 11:39:22.188918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.795 [2024-07-15 11:39:22.191518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.795 [2024-07-15 11:39:22.201013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.795 [2024-07-15 11:39:22.201447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.795 [2024-07-15 11:39:22.201466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.795 [2024-07-15 11:39:22.201473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.795 [2024-07-15 11:39:22.201636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.795 [2024-07-15 11:39:22.201800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.795 [2024-07-15 11:39:22.201809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.795 [2024-07-15 11:39:22.201815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.795 [2024-07-15 11:39:22.204511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.795 [2024-07-15 11:39:22.213816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.795 [2024-07-15 11:39:22.214251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.795 [2024-07-15 11:39:22.214295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.795 [2024-07-15 11:39:22.214318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.795 [2024-07-15 11:39:22.214714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.795 [2024-07-15 11:39:22.214879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.796 [2024-07-15 11:39:22.214888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.796 [2024-07-15 11:39:22.214894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.796 [2024-07-15 11:39:22.217594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.796 [2024-07-15 11:39:22.226633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.796 [2024-07-15 11:39:22.227076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.796 [2024-07-15 11:39:22.227120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.796 [2024-07-15 11:39:22.227141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.796 [2024-07-15 11:39:22.227730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.796 [2024-07-15 11:39:22.227905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.796 [2024-07-15 11:39:22.227915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.796 [2024-07-15 11:39:22.227921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.796 [2024-07-15 11:39:22.230560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.796 [2024-07-15 11:39:22.239543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.796 [2024-07-15 11:39:22.239912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.796 [2024-07-15 11:39:22.239928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.796 [2024-07-15 11:39:22.239935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.796 [2024-07-15 11:39:22.240097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.796 [2024-07-15 11:39:22.240270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.796 [2024-07-15 11:39:22.240280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.796 [2024-07-15 11:39:22.240286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.796 [2024-07-15 11:39:22.242892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.796 [2024-07-15 11:39:22.252391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.796 [2024-07-15 11:39:22.252844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.796 [2024-07-15 11:39:22.252886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.796 [2024-07-15 11:39:22.252907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.796 [2024-07-15 11:39:22.253500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.796 [2024-07-15 11:39:22.254046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.796 [2024-07-15 11:39:22.254056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.796 [2024-07-15 11:39:22.254062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.796 [2024-07-15 11:39:22.256693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.796 [2024-07-15 11:39:22.265334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.796 [2024-07-15 11:39:22.265788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.796 [2024-07-15 11:39:22.265830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.796 [2024-07-15 11:39:22.265850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.796 [2024-07-15 11:39:22.266444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.796 [2024-07-15 11:39:22.267012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.796 [2024-07-15 11:39:22.267021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.796 [2024-07-15 11:39:22.267027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.796 [2024-07-15 11:39:22.269660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.796 [2024-07-15 11:39:22.278334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.796 [2024-07-15 11:39:22.278774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.796 [2024-07-15 11:39:22.278816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.796 [2024-07-15 11:39:22.278838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.796 [2024-07-15 11:39:22.279308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.796 [2024-07-15 11:39:22.279475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.796 [2024-07-15 11:39:22.279484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.796 [2024-07-15 11:39:22.279490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.796 [2024-07-15 11:39:22.282133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.796 [2024-07-15 11:39:22.291146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.796 [2024-07-15 11:39:22.291591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.796 [2024-07-15 11:39:22.291633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.796 [2024-07-15 11:39:22.291655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.796 [2024-07-15 11:39:22.292117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.796 [2024-07-15 11:39:22.292304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.796 [2024-07-15 11:39:22.292313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.796 [2024-07-15 11:39:22.292319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.796 [2024-07-15 11:39:22.294991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.796 [2024-07-15 11:39:22.304015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.796 [2024-07-15 11:39:22.304444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.796 [2024-07-15 11:39:22.304461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.796 [2024-07-15 11:39:22.304469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.796 [2024-07-15 11:39:22.304662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.796 [2024-07-15 11:39:22.304842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.796 [2024-07-15 11:39:22.304853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.796 [2024-07-15 11:39:22.304860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.796 [2024-07-15 11:39:22.307696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.796 [2024-07-15 11:39:22.316933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.796 [2024-07-15 11:39:22.317346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.796 [2024-07-15 11:39:22.317390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.796 [2024-07-15 11:39:22.317412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.796 [2024-07-15 11:39:22.317945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.796 [2024-07-15 11:39:22.318110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.796 [2024-07-15 11:39:22.318119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.796 [2024-07-15 11:39:22.318125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.796 [2024-07-15 11:39:22.320817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.796 [2024-07-15 11:39:22.329924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.796 [2024-07-15 11:39:22.330371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.796 [2024-07-15 11:39:22.330414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.796 [2024-07-15 11:39:22.330444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.796 [2024-07-15 11:39:22.331002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.797 [2024-07-15 11:39:22.331166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.797 [2024-07-15 11:39:22.331175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.797 [2024-07-15 11:39:22.331181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.797 [2024-07-15 11:39:22.333884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.797 [2024-07-15 11:39:22.342832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.797 [2024-07-15 11:39:22.343178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.797 [2024-07-15 11:39:22.343195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.797 [2024-07-15 11:39:22.343202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.797 [2024-07-15 11:39:22.343371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.797 [2024-07-15 11:39:22.343537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.797 [2024-07-15 11:39:22.343546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.797 [2024-07-15 11:39:22.343552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.797 [2024-07-15 11:39:22.346261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.797 [2024-07-15 11:39:22.355900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.797 [2024-07-15 11:39:22.356276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.797 [2024-07-15 11:39:22.356294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.797 [2024-07-15 11:39:22.356302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.797 [2024-07-15 11:39:22.356464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.797 [2024-07-15 11:39:22.356630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.797 [2024-07-15 11:39:22.356639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.797 [2024-07-15 11:39:22.356645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.797 [2024-07-15 11:39:22.359323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.797 [2024-07-15 11:39:22.368885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.797 [2024-07-15 11:39:22.369219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.797 [2024-07-15 11:39:22.369240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.797 [2024-07-15 11:39:22.369246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.797 [2024-07-15 11:39:22.369409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.797 [2024-07-15 11:39:22.369573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.797 [2024-07-15 11:39:22.369585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.797 [2024-07-15 11:39:22.369592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.797 [2024-07-15 11:39:22.372264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:38.797 [2024-07-15 11:39:22.381948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.797 [2024-07-15 11:39:22.382382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.797 [2024-07-15 11:39:22.382426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:38.797 [2024-07-15 11:39:22.382448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:38.797 [2024-07-15 11:39:22.383026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:38.797 [2024-07-15 11:39:22.383313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.797 [2024-07-15 11:39:22.383323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.797 [2024-07-15 11:39:22.383329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.058 [2024-07-15 11:39:22.386112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.058 [2024-07-15 11:39:22.395117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.058 [2024-07-15 11:39:22.395569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.058 [2024-07-15 11:39:22.395588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.058 [2024-07-15 11:39:22.395595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.058 [2024-07-15 11:39:22.395774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.058 [2024-07-15 11:39:22.395960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.058 [2024-07-15 11:39:22.395970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.058 [2024-07-15 11:39:22.395976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.058 [2024-07-15 11:39:22.398810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
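For context on the cadence (an observation from the timestamps above, not new information): the same reset attempt recurs roughly every 12-13 ms and keeps failing for as long as the target side stays unreachable, which is what this part of the test appears to exercise. A generic, hypothetical retry loop shaped like the one the log reflects (try_connect() and the 13 ms delay are illustrative assumptions, not SPDK's actual reconnect logic) would look like:

    /* Generic illustration of a bounded reconnect-retry loop; not SPDK code. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical stand-in for the qpair connect step; returns false while
     * the target port stays closed, mirroring the errno = 111 failures above. */
    static bool try_connect(void) { return false; }

    int main(void)
    {
        for (int attempt = 1; attempt <= 5; attempt++) {
            if (try_connect()) {
                printf("attempt %d: reconnected\n", attempt);
                return 0;
            }
            printf("attempt %d: reinitialization failed, retrying\n", attempt);
            usleep(13000);  /* ~13 ms between attempts, as in the timestamps above */
        }
        printf("controller left in failed state after retries\n");
        return 1;
    }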
00:28:39.058 [2024-07-15 11:39:22.408089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.058 [2024-07-15 11:39:22.408431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.058 [2024-07-15 11:39:22.408448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.058 [2024-07-15 11:39:22.408455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.058 [2024-07-15 11:39:22.408617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.058 [2024-07-15 11:39:22.408781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.058 [2024-07-15 11:39:22.408790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.058 [2024-07-15 11:39:22.408796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.058 [2024-07-15 11:39:22.411649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.058 [2024-07-15 11:39:22.420972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.058 [2024-07-15 11:39:22.421334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.058 [2024-07-15 11:39:22.421352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.058 [2024-07-15 11:39:22.421360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.058 [2024-07-15 11:39:22.421524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.058 [2024-07-15 11:39:22.421688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.058 [2024-07-15 11:39:22.421698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.058 [2024-07-15 11:39:22.421704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.058 [2024-07-15 11:39:22.424478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.058 [2024-07-15 11:39:22.433927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.059 [2024-07-15 11:39:22.434313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.059 [2024-07-15 11:39:22.434358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.059 [2024-07-15 11:39:22.434380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.059 [2024-07-15 11:39:22.434959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.059 [2024-07-15 11:39:22.435564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.059 [2024-07-15 11:39:22.435574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.059 [2024-07-15 11:39:22.435580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.059 [2024-07-15 11:39:22.438185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.059 [2024-07-15 11:39:22.446832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.059 [2024-07-15 11:39:22.447268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.059 [2024-07-15 11:39:22.447311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.059 [2024-07-15 11:39:22.447333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.059 [2024-07-15 11:39:22.447733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.059 [2024-07-15 11:39:22.447898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.059 [2024-07-15 11:39:22.447907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.059 [2024-07-15 11:39:22.447914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.059 [2024-07-15 11:39:22.450549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.059 [2024-07-15 11:39:22.459871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.059 [2024-07-15 11:39:22.460312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.059 [2024-07-15 11:39:22.460330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.059 [2024-07-15 11:39:22.460338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.059 [2024-07-15 11:39:22.460514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.059 [2024-07-15 11:39:22.460697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.059 [2024-07-15 11:39:22.460707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.059 [2024-07-15 11:39:22.460713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.059 [2024-07-15 11:39:22.463402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.059 [2024-07-15 11:39:22.472718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.059 [2024-07-15 11:39:22.473138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.059 [2024-07-15 11:39:22.473155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.059 [2024-07-15 11:39:22.473164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.059 [2024-07-15 11:39:22.473342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.059 [2024-07-15 11:39:22.473525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.059 [2024-07-15 11:39:22.473535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.059 [2024-07-15 11:39:22.473541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.059 [2024-07-15 11:39:22.476236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.059 [2024-07-15 11:39:22.485740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.059 [2024-07-15 11:39:22.486103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.059 [2024-07-15 11:39:22.486120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.059 [2024-07-15 11:39:22.486126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.059 [2024-07-15 11:39:22.486295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.059 [2024-07-15 11:39:22.486460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.059 [2024-07-15 11:39:22.486469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.059 [2024-07-15 11:39:22.486475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.059 [2024-07-15 11:39:22.489140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.059 [2024-07-15 11:39:22.498741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.059 [2024-07-15 11:39:22.499103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.059 [2024-07-15 11:39:22.499145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.059 [2024-07-15 11:39:22.499168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.059 [2024-07-15 11:39:22.499662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.059 [2024-07-15 11:39:22.499827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.059 [2024-07-15 11:39:22.499837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.059 [2024-07-15 11:39:22.499847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.059 [2024-07-15 11:39:22.502551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.059 [2024-07-15 11:39:22.511725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.059 [2024-07-15 11:39:22.512060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.059 [2024-07-15 11:39:22.512077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.059 [2024-07-15 11:39:22.512084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.059 [2024-07-15 11:39:22.512253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.059 [2024-07-15 11:39:22.512416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.059 [2024-07-15 11:39:22.512425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.059 [2024-07-15 11:39:22.512431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.059 [2024-07-15 11:39:22.515127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.059 [2024-07-15 11:39:22.524692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.059 [2024-07-15 11:39:22.525120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.059 [2024-07-15 11:39:22.525136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.059 [2024-07-15 11:39:22.525143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.059 [2024-07-15 11:39:22.525330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.059 [2024-07-15 11:39:22.525503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.059 [2024-07-15 11:39:22.525513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.059 [2024-07-15 11:39:22.525519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.059 [2024-07-15 11:39:22.528185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.059 [2024-07-15 11:39:22.537539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.059 [2024-07-15 11:39:22.537983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.059 [2024-07-15 11:39:22.538000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.059 [2024-07-15 11:39:22.538007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.059 [2024-07-15 11:39:22.538179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.059 [2024-07-15 11:39:22.538360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.059 [2024-07-15 11:39:22.538371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.059 [2024-07-15 11:39:22.538377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.059 [2024-07-15 11:39:22.540993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.059 [2024-07-15 11:39:22.550493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.059 [2024-07-15 11:39:22.550800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.059 [2024-07-15 11:39:22.550817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.059 [2024-07-15 11:39:22.550824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.059 [2024-07-15 11:39:22.550996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.059 [2024-07-15 11:39:22.551168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.059 [2024-07-15 11:39:22.551177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.059 [2024-07-15 11:39:22.551184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.059 [2024-07-15 11:39:22.553820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.059 [2024-07-15 11:39:22.563492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.059 [2024-07-15 11:39:22.563890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.059 [2024-07-15 11:39:22.563906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.059 [2024-07-15 11:39:22.563914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.059 [2024-07-15 11:39:22.564086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.059 [2024-07-15 11:39:22.564265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.059 [2024-07-15 11:39:22.564276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.059 [2024-07-15 11:39:22.564282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.059 [2024-07-15 11:39:22.566907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.060 [2024-07-15 11:39:22.576511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.060 [2024-07-15 11:39:22.576898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.060 [2024-07-15 11:39:22.576916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.060 [2024-07-15 11:39:22.576923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.060 [2024-07-15 11:39:22.577086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.060 [2024-07-15 11:39:22.577271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.060 [2024-07-15 11:39:22.577284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.060 [2024-07-15 11:39:22.577290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.060 [2024-07-15 11:39:22.579962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.060 [2024-07-15 11:39:22.589441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.060 [2024-07-15 11:39:22.589787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.060 [2024-07-15 11:39:22.589823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.060 [2024-07-15 11:39:22.589847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.060 [2024-07-15 11:39:22.590448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.060 [2024-07-15 11:39:22.590942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.060 [2024-07-15 11:39:22.590952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.060 [2024-07-15 11:39:22.590958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.060 [2024-07-15 11:39:22.593602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.060 [2024-07-15 11:39:22.602308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.060 [2024-07-15 11:39:22.602618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.060 [2024-07-15 11:39:22.602635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.060 [2024-07-15 11:39:22.602642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.060 [2024-07-15 11:39:22.602815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.060 [2024-07-15 11:39:22.602989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.060 [2024-07-15 11:39:22.602999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.060 [2024-07-15 11:39:22.603006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.060 [2024-07-15 11:39:22.605649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.060 [2024-07-15 11:39:22.615316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.060 [2024-07-15 11:39:22.615630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.060 [2024-07-15 11:39:22.615647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.060 [2024-07-15 11:39:22.615656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.060 [2024-07-15 11:39:22.615819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.060 [2024-07-15 11:39:22.615984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.060 [2024-07-15 11:39:22.615993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.060 [2024-07-15 11:39:22.615999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.060 [2024-07-15 11:39:22.618646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.060 [2024-07-15 11:39:22.628297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.060 [2024-07-15 11:39:22.628662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.060 [2024-07-15 11:39:22.628679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.060 [2024-07-15 11:39:22.628686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.060 [2024-07-15 11:39:22.628848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.060 [2024-07-15 11:39:22.629012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.060 [2024-07-15 11:39:22.629021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.060 [2024-07-15 11:39:22.629030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.060 [2024-07-15 11:39:22.631676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.060 [2024-07-15 11:39:22.641309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.060 [2024-07-15 11:39:22.641609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.060 [2024-07-15 11:39:22.641645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.060 [2024-07-15 11:39:22.641651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.060 [2024-07-15 11:39:22.641815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.060 [2024-07-15 11:39:22.641978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.060 [2024-07-15 11:39:22.641988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.060 [2024-07-15 11:39:22.641994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.060 [2024-07-15 11:39:22.644726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.321 [2024-07-15 11:39:22.654373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.321 [2024-07-15 11:39:22.654702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.321 [2024-07-15 11:39:22.654719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.321 [2024-07-15 11:39:22.654727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.321 [2024-07-15 11:39:22.654899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.321 [2024-07-15 11:39:22.655075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.321 [2024-07-15 11:39:22.655085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.321 [2024-07-15 11:39:22.655091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.321 [2024-07-15 11:39:22.657959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.321 [2024-07-15 11:39:22.667414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.321 [2024-07-15 11:39:22.667838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.321 [2024-07-15 11:39:22.667854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.321 [2024-07-15 11:39:22.667861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.321 [2024-07-15 11:39:22.668023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.321 [2024-07-15 11:39:22.668186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.321 [2024-07-15 11:39:22.668196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.321 [2024-07-15 11:39:22.668202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.321 [2024-07-15 11:39:22.670856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.321 [2024-07-15 11:39:22.680383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.321 [2024-07-15 11:39:22.680691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.321 [2024-07-15 11:39:22.680712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.321 [2024-07-15 11:39:22.680719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.321 [2024-07-15 11:39:22.680892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.321 [2024-07-15 11:39:22.681065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.321 [2024-07-15 11:39:22.681075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.321 [2024-07-15 11:39:22.681081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.321 [2024-07-15 11:39:22.683792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.321 [2024-07-15 11:39:22.693233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.321 [2024-07-15 11:39:22.693548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.321 [2024-07-15 11:39:22.693565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.321 [2024-07-15 11:39:22.693573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.321 [2024-07-15 11:39:22.693745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.321 [2024-07-15 11:39:22.693918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.321 [2024-07-15 11:39:22.693928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.321 [2024-07-15 11:39:22.693934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.321 [2024-07-15 11:39:22.696583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.321 [2024-07-15 11:39:22.706235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.321 [2024-07-15 11:39:22.706608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.321 [2024-07-15 11:39:22.706624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.321 [2024-07-15 11:39:22.706631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.321 [2024-07-15 11:39:22.706794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.321 [2024-07-15 11:39:22.706958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.321 [2024-07-15 11:39:22.706967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.321 [2024-07-15 11:39:22.706973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.321 [2024-07-15 11:39:22.709617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.321 [2024-07-15 11:39:22.719259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.322 [2024-07-15 11:39:22.719584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.322 [2024-07-15 11:39:22.719626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.322 [2024-07-15 11:39:22.719648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.322 [2024-07-15 11:39:22.720194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.322 [2024-07-15 11:39:22.720392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.322 [2024-07-15 11:39:22.720402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.322 [2024-07-15 11:39:22.720408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.322 [2024-07-15 11:39:22.723141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.322 [2024-07-15 11:39:22.732196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.322 [2024-07-15 11:39:22.732573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.322 [2024-07-15 11:39:22.732589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.322 [2024-07-15 11:39:22.732596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.322 [2024-07-15 11:39:22.732758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.322 [2024-07-15 11:39:22.732922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.322 [2024-07-15 11:39:22.732931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.322 [2024-07-15 11:39:22.732938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.322 [2024-07-15 11:39:22.735573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.322 [2024-07-15 11:39:22.745218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.322 [2024-07-15 11:39:22.745585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.322 [2024-07-15 11:39:22.745601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.322 [2024-07-15 11:39:22.745608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.322 [2024-07-15 11:39:22.745769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.322 [2024-07-15 11:39:22.745934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.322 [2024-07-15 11:39:22.745943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.322 [2024-07-15 11:39:22.745949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.322 [2024-07-15 11:39:22.748597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.322 [2024-07-15 11:39:22.758254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.322 [2024-07-15 11:39:22.758613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.322 [2024-07-15 11:39:22.758629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.322 [2024-07-15 11:39:22.758636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.322 [2024-07-15 11:39:22.758799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.322 [2024-07-15 11:39:22.758963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.322 [2024-07-15 11:39:22.758972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.322 [2024-07-15 11:39:22.758978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.322 [2024-07-15 11:39:22.761678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.322 [2024-07-15 11:39:22.771340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.322 [2024-07-15 11:39:22.771634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.322 [2024-07-15 11:39:22.771675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.322 [2024-07-15 11:39:22.771697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.322 [2024-07-15 11:39:22.772186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.322 [2024-07-15 11:39:22.772356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.322 [2024-07-15 11:39:22.772366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.322 [2024-07-15 11:39:22.772372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.322 [2024-07-15 11:39:22.775078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.322 [2024-07-15 11:39:22.784514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.322 [2024-07-15 11:39:22.784899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.322 [2024-07-15 11:39:22.784941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.322 [2024-07-15 11:39:22.784963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.322 [2024-07-15 11:39:22.785432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.322 [2024-07-15 11:39:22.785607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.322 [2024-07-15 11:39:22.785617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.322 [2024-07-15 11:39:22.785623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.322 [2024-07-15 11:39:22.788375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.322 [2024-07-15 11:39:22.797623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.322 [2024-07-15 11:39:22.798063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.322 [2024-07-15 11:39:22.798080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.322 [2024-07-15 11:39:22.798087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.322 [2024-07-15 11:39:22.798264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.322 [2024-07-15 11:39:22.798438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.322 [2024-07-15 11:39:22.798447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.322 [2024-07-15 11:39:22.798453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.322 [2024-07-15 11:39:22.801061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.322 [2024-07-15 11:39:22.810535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.322 [2024-07-15 11:39:22.810993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.322 [2024-07-15 11:39:22.811035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.322 [2024-07-15 11:39:22.811063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.322 [2024-07-15 11:39:22.811589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.322 [2024-07-15 11:39:22.811764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.322 [2024-07-15 11:39:22.811773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.322 [2024-07-15 11:39:22.811780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.322 [2024-07-15 11:39:22.814424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.322 [2024-07-15 11:39:22.823412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.322 [2024-07-15 11:39:22.823818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.322 [2024-07-15 11:39:22.823835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.322 [2024-07-15 11:39:22.823841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.322 [2024-07-15 11:39:22.824004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.322 [2024-07-15 11:39:22.824168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.322 [2024-07-15 11:39:22.824177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.322 [2024-07-15 11:39:22.824183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.322 [2024-07-15 11:39:22.826877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.323 [2024-07-15 11:39:22.836260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.323 [2024-07-15 11:39:22.836630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.323 [2024-07-15 11:39:22.836672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.323 [2024-07-15 11:39:22.836694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.323 [2024-07-15 11:39:22.837194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.323 [2024-07-15 11:39:22.837364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.323 [2024-07-15 11:39:22.837375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.323 [2024-07-15 11:39:22.837381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.323 [2024-07-15 11:39:22.840016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.323 [2024-07-15 11:39:22.849187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.323 [2024-07-15 11:39:22.849622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.323 [2024-07-15 11:39:22.849640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.323 [2024-07-15 11:39:22.849648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.323 [2024-07-15 11:39:22.849820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.323 [2024-07-15 11:39:22.849996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.323 [2024-07-15 11:39:22.850010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.323 [2024-07-15 11:39:22.850016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.323 [2024-07-15 11:39:22.852658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.323 [2024-07-15 11:39:22.862099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.323 [2024-07-15 11:39:22.862478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.323 [2024-07-15 11:39:22.862496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.323 [2024-07-15 11:39:22.862503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.323 [2024-07-15 11:39:22.862675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.323 [2024-07-15 11:39:22.862850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.323 [2024-07-15 11:39:22.862859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.323 [2024-07-15 11:39:22.862865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.323 [2024-07-15 11:39:22.865514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.323 [2024-07-15 11:39:22.874962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.323 [2024-07-15 11:39:22.875329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.323 [2024-07-15 11:39:22.875347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.323 [2024-07-15 11:39:22.875356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.323 [2024-07-15 11:39:22.875528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.323 [2024-07-15 11:39:22.875700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.323 [2024-07-15 11:39:22.875710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.323 [2024-07-15 11:39:22.875717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.323 [2024-07-15 11:39:22.878392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.323 [2024-07-15 11:39:22.887925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.323 [2024-07-15 11:39:22.888367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.323 [2024-07-15 11:39:22.888410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.323 [2024-07-15 11:39:22.888432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.323 [2024-07-15 11:39:22.889011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.323 [2024-07-15 11:39:22.889614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.323 [2024-07-15 11:39:22.889625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.323 [2024-07-15 11:39:22.889631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.323 [2024-07-15 11:39:22.892325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.323 [2024-07-15 11:39:22.901016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.323 [2024-07-15 11:39:22.901451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.323 [2024-07-15 11:39:22.901468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.323 [2024-07-15 11:39:22.901475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.323 [2024-07-15 11:39:22.901638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.323 [2024-07-15 11:39:22.901802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.323 [2024-07-15 11:39:22.901811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.323 [2024-07-15 11:39:22.901817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.323 [2024-07-15 11:39:22.904560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.584 [2024-07-15 11:39:22.914118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.584 [2024-07-15 11:39:22.914553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.584 [2024-07-15 11:39:22.914571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.584 [2024-07-15 11:39:22.914579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.584 [2024-07-15 11:39:22.914756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.584 [2024-07-15 11:39:22.914936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.584 [2024-07-15 11:39:22.914947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.584 [2024-07-15 11:39:22.914954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.584 [2024-07-15 11:39:22.917737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.584 [2024-07-15 11:39:22.927044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.584 [2024-07-15 11:39:22.927468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.584 [2024-07-15 11:39:22.927485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.584 [2024-07-15 11:39:22.927493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.584 [2024-07-15 11:39:22.927664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.584 [2024-07-15 11:39:22.927838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.584 [2024-07-15 11:39:22.927848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.584 [2024-07-15 11:39:22.927854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.584 [2024-07-15 11:39:22.930593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.584 [2024-07-15 11:39:22.939885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.584 [2024-07-15 11:39:22.940267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.584 [2024-07-15 11:39:22.940311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.584 [2024-07-15 11:39:22.940333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.584 [2024-07-15 11:39:22.940619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.584 [2024-07-15 11:39:22.940784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.584 [2024-07-15 11:39:22.940793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.584 [2024-07-15 11:39:22.940799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.584 [2024-07-15 11:39:22.943424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.584 [2024-07-15 11:39:22.952890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.584 [2024-07-15 11:39:22.953339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.584 [2024-07-15 11:39:22.953357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.584 [2024-07-15 11:39:22.953364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.584 [2024-07-15 11:39:22.953527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.584 [2024-07-15 11:39:22.953690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.584 [2024-07-15 11:39:22.953700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.584 [2024-07-15 11:39:22.953706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.584 [2024-07-15 11:39:22.956308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.584 [2024-07-15 11:39:22.965885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.584 [2024-07-15 11:39:22.966323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.584 [2024-07-15 11:39:22.966340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.584 [2024-07-15 11:39:22.966347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.584 [2024-07-15 11:39:22.966510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.584 [2024-07-15 11:39:22.966673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.584 [2024-07-15 11:39:22.966682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.584 [2024-07-15 11:39:22.966688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.584 [2024-07-15 11:39:22.969382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.584 [2024-07-15 11:39:22.978760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.584 [2024-07-15 11:39:22.979187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.584 [2024-07-15 11:39:22.979203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.584 [2024-07-15 11:39:22.979210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.584 [2024-07-15 11:39:22.979379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.584 [2024-07-15 11:39:22.979544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.584 [2024-07-15 11:39:22.979553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.584 [2024-07-15 11:39:22.979562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.584 [2024-07-15 11:39:22.982335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.584 [2024-07-15 11:39:22.991638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.584 [2024-07-15 11:39:22.992066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.584 [2024-07-15 11:39:22.992082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.584 [2024-07-15 11:39:22.992089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.585 [2024-07-15 11:39:22.992275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.585 [2024-07-15 11:39:22.992450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.585 [2024-07-15 11:39:22.992460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.585 [2024-07-15 11:39:22.992466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.585 [2024-07-15 11:39:22.995130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.585 [2024-07-15 11:39:23.004562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.585 [2024-07-15 11:39:23.005029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.585 [2024-07-15 11:39:23.005070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.585 [2024-07-15 11:39:23.005092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.585 [2024-07-15 11:39:23.005598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.585 [2024-07-15 11:39:23.005764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.585 [2024-07-15 11:39:23.005774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.585 [2024-07-15 11:39:23.005780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.585 [2024-07-15 11:39:23.008408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.585 [2024-07-15 11:39:23.017506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.585 [2024-07-15 11:39:23.017887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.585 [2024-07-15 11:39:23.017928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.585 [2024-07-15 11:39:23.017951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.585 [2024-07-15 11:39:23.018539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.585 [2024-07-15 11:39:23.018752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.585 [2024-07-15 11:39:23.018762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.585 [2024-07-15 11:39:23.018768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.585 [2024-07-15 11:39:23.021453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.585 [2024-07-15 11:39:23.030408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.585 [2024-07-15 11:39:23.030819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.585 [2024-07-15 11:39:23.030834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.585 [2024-07-15 11:39:23.030840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.585 [2024-07-15 11:39:23.031003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.585 [2024-07-15 11:39:23.031166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.585 [2024-07-15 11:39:23.031176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.585 [2024-07-15 11:39:23.031181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.585 [2024-07-15 11:39:23.033878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.585 [2024-07-15 11:39:23.043404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.585 [2024-07-15 11:39:23.043785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.585 [2024-07-15 11:39:23.043827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.585 [2024-07-15 11:39:23.043849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.585 [2024-07-15 11:39:23.044439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.585 [2024-07-15 11:39:23.044976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.585 [2024-07-15 11:39:23.044985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.585 [2024-07-15 11:39:23.044991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.585 [2024-07-15 11:39:23.047695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.585 [2024-07-15 11:39:23.056339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.585 [2024-07-15 11:39:23.056729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.585 [2024-07-15 11:39:23.056773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.585 [2024-07-15 11:39:23.056796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.585 [2024-07-15 11:39:23.057263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.585 [2024-07-15 11:39:23.057429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.585 [2024-07-15 11:39:23.057439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.585 [2024-07-15 11:39:23.057445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.585 [2024-07-15 11:39:23.060162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.585 [2024-07-15 11:39:23.069268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.585 [2024-07-15 11:39:23.069700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.585 [2024-07-15 11:39:23.069737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.585 [2024-07-15 11:39:23.069759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.585 [2024-07-15 11:39:23.069921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.585 [2024-07-15 11:39:23.070088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.585 [2024-07-15 11:39:23.070097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.585 [2024-07-15 11:39:23.070103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.585 [2024-07-15 11:39:23.072807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.585 [2024-07-15 11:39:23.082158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.585 [2024-07-15 11:39:23.082633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.585 [2024-07-15 11:39:23.082676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.585 [2024-07-15 11:39:23.082699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.585 [2024-07-15 11:39:23.083240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.585 [2024-07-15 11:39:23.083430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.585 [2024-07-15 11:39:23.083440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.585 [2024-07-15 11:39:23.083446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.585 [2024-07-15 11:39:23.086111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.585 [2024-07-15 11:39:23.095104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.585 [2024-07-15 11:39:23.095529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.585 [2024-07-15 11:39:23.095545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.585 [2024-07-15 11:39:23.095552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.585 [2024-07-15 11:39:23.095715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.585 [2024-07-15 11:39:23.095878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.585 [2024-07-15 11:39:23.095887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.585 [2024-07-15 11:39:23.095893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.585 [2024-07-15 11:39:23.098645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.585 [2024-07-15 11:39:23.107955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.585 [2024-07-15 11:39:23.108325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.585 [2024-07-15 11:39:23.108367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.585 [2024-07-15 11:39:23.108389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.585 [2024-07-15 11:39:23.108820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.585 [2024-07-15 11:39:23.108984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.585 [2024-07-15 11:39:23.108994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.585 [2024-07-15 11:39:23.109000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.585 [2024-07-15 11:39:23.111742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.585 [2024-07-15 11:39:23.120871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.585 [2024-07-15 11:39:23.121278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.585 [2024-07-15 11:39:23.121294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.585 [2024-07-15 11:39:23.121301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.585 [2024-07-15 11:39:23.121464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.585 [2024-07-15 11:39:23.121627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.585 [2024-07-15 11:39:23.121636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.585 [2024-07-15 11:39:23.121642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.585 [2024-07-15 11:39:23.124338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.585 [2024-07-15 11:39:23.133692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.585 [2024-07-15 11:39:23.134106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.586 [2024-07-15 11:39:23.134122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.586 [2024-07-15 11:39:23.134129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.586 [2024-07-15 11:39:23.134314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.586 [2024-07-15 11:39:23.134489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.586 [2024-07-15 11:39:23.134498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.586 [2024-07-15 11:39:23.134505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.586 [2024-07-15 11:39:23.137169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.586 [2024-07-15 11:39:23.146536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.586 [2024-07-15 11:39:23.146960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.586 [2024-07-15 11:39:23.146977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.586 [2024-07-15 11:39:23.146983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.586 [2024-07-15 11:39:23.147146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.586 [2024-07-15 11:39:23.147334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.586 [2024-07-15 11:39:23.147345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.586 [2024-07-15 11:39:23.147352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.586 [2024-07-15 11:39:23.150020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.586 [2024-07-15 11:39:23.159405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.586 [2024-07-15 11:39:23.159792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.586 [2024-07-15 11:39:23.159835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.586 [2024-07-15 11:39:23.159864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.586 [2024-07-15 11:39:23.160346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.586 [2024-07-15 11:39:23.160520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.586 [2024-07-15 11:39:23.160531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.586 [2024-07-15 11:39:23.160537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.586 [2024-07-15 11:39:23.163374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.586 [2024-07-15 11:39:23.172384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.586 [2024-07-15 11:39:23.172838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.586 [2024-07-15 11:39:23.172879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.586 [2024-07-15 11:39:23.172901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.586 [2024-07-15 11:39:23.173378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.586 [2024-07-15 11:39:23.173552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.586 [2024-07-15 11:39:23.173561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.586 [2024-07-15 11:39:23.173567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.176351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.845 [2024-07-15 11:39:23.185322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.185756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.185772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.185779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.185942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.186106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.186116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.186122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.188875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.845 [2024-07-15 11:39:23.198148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.198618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.198661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.198683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.199194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.199389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.199399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.199406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.202071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.845 [2024-07-15 11:39:23.211056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.211477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.211521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.211543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.212121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.212565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.212575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.212582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.215291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.845 [2024-07-15 11:39:23.223910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.224359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.224403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.224425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.225002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.225527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.225537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.225543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.228204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.845 [2024-07-15 11:39:23.236809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.237173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.237189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.237196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.237385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.237558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.237568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.237574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.240236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.845 [2024-07-15 11:39:23.249712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.250085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.250102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.250109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.250287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.250460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.250469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.250476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.253190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.845 [2024-07-15 11:39:23.262700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.263124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.263140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.263147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.263335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.263510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.263519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.263526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.266277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.845 [2024-07-15 11:39:23.275543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.275990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.276034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.276056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.276574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.276748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.276758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.276764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.279412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.845 [2024-07-15 11:39:23.288477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.288846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.288888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.288917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.289377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.289552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.289562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.289569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.292230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.845 [2024-07-15 11:39:23.301408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.301816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.301851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.301874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.302428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.302604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.302613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.302619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.308861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.845 [2024-07-15 11:39:23.316177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.316716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.316758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.316781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.317374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.317828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.317841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.317850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.321909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.845 [2024-07-15 11:39:23.329092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.329546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.329563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.329571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.329743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.329916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.329928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.329934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.332687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.845 [2024-07-15 11:39:23.341896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.342340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.342385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.342407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.342953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.343351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.343370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.343384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.349640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.845 [2024-07-15 11:39:23.356893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.357418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.357439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.357449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.357704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.357960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.357972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.357981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.362050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.845 [2024-07-15 11:39:23.369953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.370375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.370414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.370438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.845 [2024-07-15 11:39:23.370973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.845 [2024-07-15 11:39:23.371145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.845 [2024-07-15 11:39:23.371155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.845 [2024-07-15 11:39:23.371162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.845 [2024-07-15 11:39:23.373914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.845 [2024-07-15 11:39:23.382869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.845 [2024-07-15 11:39:23.383317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.845 [2024-07-15 11:39:23.383359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.845 [2024-07-15 11:39:23.383382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.846 [2024-07-15 11:39:23.383857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.846 [2024-07-15 11:39:23.384022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.846 [2024-07-15 11:39:23.384032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.846 [2024-07-15 11:39:23.384038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.846 [2024-07-15 11:39:23.386730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.846 [2024-07-15 11:39:23.395807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.846 [2024-07-15 11:39:23.396239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.846 [2024-07-15 11:39:23.396255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.846 [2024-07-15 11:39:23.396262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.846 [2024-07-15 11:39:23.396425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.846 [2024-07-15 11:39:23.396588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.846 [2024-07-15 11:39:23.396597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.846 [2024-07-15 11:39:23.396603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.846 [2024-07-15 11:39:23.399338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.846 [2024-07-15 11:39:23.408683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.846 [2024-07-15 11:39:23.409094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.846 [2024-07-15 11:39:23.409131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.846 [2024-07-15 11:39:23.409154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.846 [2024-07-15 11:39:23.409682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.846 [2024-07-15 11:39:23.409856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.846 [2024-07-15 11:39:23.409866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.846 [2024-07-15 11:39:23.409872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.846 [2024-07-15 11:39:23.412805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.846 [2024-07-15 11:39:23.421701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.846 [2024-07-15 11:39:23.422053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.846 [2024-07-15 11:39:23.422071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.846 [2024-07-15 11:39:23.422078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:39.846 [2024-07-15 11:39:23.422259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:39.846 [2024-07-15 11:39:23.422433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.846 [2024-07-15 11:39:23.422442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.846 [2024-07-15 11:39:23.422449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.846 [2024-07-15 11:39:23.425073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.846 [2024-07-15 11:39:23.434686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.846 [2024-07-15 11:39:23.435166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.846 [2024-07-15 11:39:23.435209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:39.846 [2024-07-15 11:39:23.435244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.106 [2024-07-15 11:39:23.435735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.106 [2024-07-15 11:39:23.435929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.106 [2024-07-15 11:39:23.435939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.106 [2024-07-15 11:39:23.435945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.106 [2024-07-15 11:39:23.438722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.106 [2024-07-15 11:39:23.447536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.106 [2024-07-15 11:39:23.447976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.106 [2024-07-15 11:39:23.448019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.106 [2024-07-15 11:39:23.448041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.106 [2024-07-15 11:39:23.448636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.106 [2024-07-15 11:39:23.449026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.106 [2024-07-15 11:39:23.449036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.106 [2024-07-15 11:39:23.449042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.106 [2024-07-15 11:39:23.451675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.106 [2024-07-15 11:39:23.460351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.106 [2024-07-15 11:39:23.460679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.106 [2024-07-15 11:39:23.460696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.106 [2024-07-15 11:39:23.460703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.106 [2024-07-15 11:39:23.460865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.106 [2024-07-15 11:39:23.461029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.106 [2024-07-15 11:39:23.461038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.106 [2024-07-15 11:39:23.461048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.106 [2024-07-15 11:39:23.463686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.106 [2024-07-15 11:39:23.473216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.106 [2024-07-15 11:39:23.473566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.106 [2024-07-15 11:39:23.473582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.106 [2024-07-15 11:39:23.473589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.106 [2024-07-15 11:39:23.473753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.106 [2024-07-15 11:39:23.473916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.106 [2024-07-15 11:39:23.473925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.106 [2024-07-15 11:39:23.473931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.106 [2024-07-15 11:39:23.476530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.106 [2024-07-15 11:39:23.486023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.106 [2024-07-15 11:39:23.486452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.106 [2024-07-15 11:39:23.486469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.106 [2024-07-15 11:39:23.486476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.106 [2024-07-15 11:39:23.486639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.106 [2024-07-15 11:39:23.486802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.106 [2024-07-15 11:39:23.486811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.106 [2024-07-15 11:39:23.486817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.106 [2024-07-15 11:39:23.489510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.106 [2024-07-15 11:39:23.498850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.106 [2024-07-15 11:39:23.499301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.106 [2024-07-15 11:39:23.499344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.106 [2024-07-15 11:39:23.499366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.106 [2024-07-15 11:39:23.499890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.106 [2024-07-15 11:39:23.500054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.106 [2024-07-15 11:39:23.500064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.106 [2024-07-15 11:39:23.500070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.106 [2024-07-15 11:39:23.502767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.106 [2024-07-15 11:39:23.511691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.106 [2024-07-15 11:39:23.512117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.106 [2024-07-15 11:39:23.512136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.106 [2024-07-15 11:39:23.512144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.106 [2024-07-15 11:39:23.512329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.106 [2024-07-15 11:39:23.512503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.106 [2024-07-15 11:39:23.512512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.106 [2024-07-15 11:39:23.512518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.106 [2024-07-15 11:39:23.515187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.106 [2024-07-15 11:39:23.524566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.106 [2024-07-15 11:39:23.524906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.106 [2024-07-15 11:39:23.524922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.106 [2024-07-15 11:39:23.524929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.106 [2024-07-15 11:39:23.525091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.106 [2024-07-15 11:39:23.525260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.106 [2024-07-15 11:39:23.525270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.106 [2024-07-15 11:39:23.525293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.106 [2024-07-15 11:39:23.527966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.106 [2024-07-15 11:39:23.537425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.106 [2024-07-15 11:39:23.537848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.106 [2024-07-15 11:39:23.537891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.106 [2024-07-15 11:39:23.537913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.106 [2024-07-15 11:39:23.538394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.106 [2024-07-15 11:39:23.538569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.106 [2024-07-15 11:39:23.538579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.106 [2024-07-15 11:39:23.538585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.106 [2024-07-15 11:39:23.541240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.106 [2024-07-15 11:39:23.550283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.106 [2024-07-15 11:39:23.550693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.106 [2024-07-15 11:39:23.550709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.106 [2024-07-15 11:39:23.550716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.106 [2024-07-15 11:39:23.550879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.106 [2024-07-15 11:39:23.551045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.106 [2024-07-15 11:39:23.551054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.106 [2024-07-15 11:39:23.551060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.106 [2024-07-15 11:39:23.553753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.106 [2024-07-15 11:39:23.563103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.106 [2024-07-15 11:39:23.563560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.106 [2024-07-15 11:39:23.563603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.106 [2024-07-15 11:39:23.563625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.107 [2024-07-15 11:39:23.564018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.107 [2024-07-15 11:39:23.564183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.107 [2024-07-15 11:39:23.564192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.107 [2024-07-15 11:39:23.564199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.107 [2024-07-15 11:39:23.566891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.107 [2024-07-15 11:39:23.576022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.107 [2024-07-15 11:39:23.576460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.107 [2024-07-15 11:39:23.576502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.107 [2024-07-15 11:39:23.576525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.107 [2024-07-15 11:39:23.577103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.107 [2024-07-15 11:39:23.577529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.107 [2024-07-15 11:39:23.577540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.107 [2024-07-15 11:39:23.577546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.107 [2024-07-15 11:39:23.580202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.107 [2024-07-15 11:39:23.588921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.107 [2024-07-15 11:39:23.589320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.107 [2024-07-15 11:39:23.589337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.107 [2024-07-15 11:39:23.589345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.107 [2024-07-15 11:39:23.589521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.107 [2024-07-15 11:39:23.589686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.107 [2024-07-15 11:39:23.589695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.107 [2024-07-15 11:39:23.589701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.107 [2024-07-15 11:39:23.592349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.107 [2024-07-15 11:39:23.601842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.107 [2024-07-15 11:39:23.602262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.107 [2024-07-15 11:39:23.602280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.107 [2024-07-15 11:39:23.602288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.107 [2024-07-15 11:39:23.602462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.107 [2024-07-15 11:39:23.602639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.107 [2024-07-15 11:39:23.602648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.107 [2024-07-15 11:39:23.602654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.107 [2024-07-15 11:39:23.605270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.107 [2024-07-15 11:39:23.614763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.107 [2024-07-15 11:39:23.615150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.107 [2024-07-15 11:39:23.615166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.107 [2024-07-15 11:39:23.615173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.107 [2024-07-15 11:39:23.615341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.107 [2024-07-15 11:39:23.615506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.107 [2024-07-15 11:39:23.615515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.107 [2024-07-15 11:39:23.615521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.107 [2024-07-15 11:39:23.618181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.107 [2024-07-15 11:39:23.627654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.107 [2024-07-15 11:39:23.628087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.107 [2024-07-15 11:39:23.628129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.107 [2024-07-15 11:39:23.628151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.107 [2024-07-15 11:39:23.628581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.107 [2024-07-15 11:39:23.628756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.107 [2024-07-15 11:39:23.628765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.107 [2024-07-15 11:39:23.628771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.107 [2024-07-15 11:39:23.631413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.107 [2024-07-15 11:39:23.640542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.107 [2024-07-15 11:39:23.640981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.107 [2024-07-15 11:39:23.641021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.107 [2024-07-15 11:39:23.641051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.107 [2024-07-15 11:39:23.641494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.107 [2024-07-15 11:39:23.641659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.107 [2024-07-15 11:39:23.641669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.107 [2024-07-15 11:39:23.641675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.107 [2024-07-15 11:39:23.644266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.107 [2024-07-15 11:39:23.653464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.107 [2024-07-15 11:39:23.653891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.107 [2024-07-15 11:39:23.653907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.107 [2024-07-15 11:39:23.653914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.107 [2024-07-15 11:39:23.654076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.107 [2024-07-15 11:39:23.654245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.107 [2024-07-15 11:39:23.654254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.107 [2024-07-15 11:39:23.654261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.107 [2024-07-15 11:39:23.656853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.107 [2024-07-15 11:39:23.666357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.107 [2024-07-15 11:39:23.666836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.107 [2024-07-15 11:39:23.666878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.107 [2024-07-15 11:39:23.666900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.107 [2024-07-15 11:39:23.667437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.107 [2024-07-15 11:39:23.667611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.107 [2024-07-15 11:39:23.667620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.107 [2024-07-15 11:39:23.667626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.107 [2024-07-15 11:39:23.670463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.107 [2024-07-15 11:39:23.679397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.107 [2024-07-15 11:39:23.679753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.107 [2024-07-15 11:39:23.679771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.107 [2024-07-15 11:39:23.679778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.107 [2024-07-15 11:39:23.679949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.107 [2024-07-15 11:39:23.680122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.107 [2024-07-15 11:39:23.680134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.107 [2024-07-15 11:39:23.680141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.107 [2024-07-15 11:39:23.682883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.107 [2024-07-15 11:39:23.692347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.107 [2024-07-15 11:39:23.692762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.107 [2024-07-15 11:39:23.692797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.107 [2024-07-15 11:39:23.692821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.107 [2024-07-15 11:39:23.693399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.107 [2024-07-15 11:39:23.693573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.107 [2024-07-15 11:39:23.693583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.107 [2024-07-15 11:39:23.693589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.406 [2024-07-15 11:39:23.696367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.406 [2024-07-15 11:39:23.705228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.406 [2024-07-15 11:39:23.705667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.406 [2024-07-15 11:39:23.705708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.406 [2024-07-15 11:39:23.705730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.406 [2024-07-15 11:39:23.706307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.406 [2024-07-15 11:39:23.706473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.406 [2024-07-15 11:39:23.706482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.406 [2024-07-15 11:39:23.706488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.406 [2024-07-15 11:39:23.711901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.406 [2024-07-15 11:39:23.720581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.406 [2024-07-15 11:39:23.721106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.406 [2024-07-15 11:39:23.721127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.406 [2024-07-15 11:39:23.721137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.406 [2024-07-15 11:39:23.721398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.406 [2024-07-15 11:39:23.721654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.406 [2024-07-15 11:39:23.721667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.406 [2024-07-15 11:39:23.721675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.406 [2024-07-15 11:39:23.725734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.406 [2024-07-15 11:39:23.733515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.406 [2024-07-15 11:39:23.733948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.406 [2024-07-15 11:39:23.733965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.406 [2024-07-15 11:39:23.733972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.406 [2024-07-15 11:39:23.734139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.406 [2024-07-15 11:39:23.734312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.406 [2024-07-15 11:39:23.734322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.406 [2024-07-15 11:39:23.734329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.406 [2024-07-15 11:39:23.736994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.406 [2024-07-15 11:39:23.746453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.406 [2024-07-15 11:39:23.746827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.406 [2024-07-15 11:39:23.746843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.406 [2024-07-15 11:39:23.746850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.406 [2024-07-15 11:39:23.747013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.406 [2024-07-15 11:39:23.747176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.406 [2024-07-15 11:39:23.747185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.406 [2024-07-15 11:39:23.747191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.406 [2024-07-15 11:39:23.749840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.406 [2024-07-15 11:39:23.759373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.406 [2024-07-15 11:39:23.759732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.406 [2024-07-15 11:39:23.759748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.406 [2024-07-15 11:39:23.759755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.406 [2024-07-15 11:39:23.759917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.406 [2024-07-15 11:39:23.760080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.406 [2024-07-15 11:39:23.760089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.406 [2024-07-15 11:39:23.760095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.406 [2024-07-15 11:39:23.762783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.406 [2024-07-15 11:39:23.772265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.406 [2024-07-15 11:39:23.772607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.406 [2024-07-15 11:39:23.772624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.406 [2024-07-15 11:39:23.772634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.406 [2024-07-15 11:39:23.772798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.406 [2024-07-15 11:39:23.772961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.406 [2024-07-15 11:39:23.772972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.406 [2024-07-15 11:39:23.772981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.406 [2024-07-15 11:39:23.775680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.406 [2024-07-15 11:39:23.785217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.406 [2024-07-15 11:39:23.785684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.406 [2024-07-15 11:39:23.785701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.406 [2024-07-15 11:39:23.785708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.406 [2024-07-15 11:39:23.785881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.406 [2024-07-15 11:39:23.786056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.406 [2024-07-15 11:39:23.786066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.406 [2024-07-15 11:39:23.786072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.406 [2024-07-15 11:39:23.788703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.406 [2024-07-15 11:39:23.798161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.406 [2024-07-15 11:39:23.798544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.406 [2024-07-15 11:39:23.798587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.406 [2024-07-15 11:39:23.798608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.406 [2024-07-15 11:39:23.799187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.406 [2024-07-15 11:39:23.799784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.406 [2024-07-15 11:39:23.799810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.406 [2024-07-15 11:39:23.799830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.406 [2024-07-15 11:39:23.802585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.406 [2024-07-15 11:39:23.811235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.406 [2024-07-15 11:39:23.811552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.406 [2024-07-15 11:39:23.811569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.406 [2024-07-15 11:39:23.811576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.406 [2024-07-15 11:39:23.811739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.406 [2024-07-15 11:39:23.811904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.406 [2024-07-15 11:39:23.811914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.406 [2024-07-15 11:39:23.811924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.406 [2024-07-15 11:39:23.814625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.406 [2024-07-15 11:39:23.824216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.406 [2024-07-15 11:39:23.824590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.406 [2024-07-15 11:39:23.824631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.406 [2024-07-15 11:39:23.824653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.406 [2024-07-15 11:39:23.825243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.406 [2024-07-15 11:39:23.825825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.406 [2024-07-15 11:39:23.825849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.407 [2024-07-15 11:39:23.825855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.407 [2024-07-15 11:39:23.828664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.407 [2024-07-15 11:39:23.837366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.407 [2024-07-15 11:39:23.837735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.407 [2024-07-15 11:39:23.837752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.407 [2024-07-15 11:39:23.837759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.407 [2024-07-15 11:39:23.837931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.407 [2024-07-15 11:39:23.838105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.407 [2024-07-15 11:39:23.838114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.407 [2024-07-15 11:39:23.838120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.407 [2024-07-15 11:39:23.840932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.407 [2024-07-15 11:39:23.850245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.407 [2024-07-15 11:39:23.850612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.407 [2024-07-15 11:39:23.850629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.407 [2024-07-15 11:39:23.850636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.407 [2024-07-15 11:39:23.850809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.407 [2024-07-15 11:39:23.850982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.407 [2024-07-15 11:39:23.850991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.407 [2024-07-15 11:39:23.850998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.407 [2024-07-15 11:39:23.853640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.407 [2024-07-15 11:39:23.863264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.407 [2024-07-15 11:39:23.863709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.407 [2024-07-15 11:39:23.863726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.407 [2024-07-15 11:39:23.863733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.407 [2024-07-15 11:39:23.863905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.407 [2024-07-15 11:39:23.864079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.407 [2024-07-15 11:39:23.864089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.407 [2024-07-15 11:39:23.864095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.407 [2024-07-15 11:39:23.866782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.407 [2024-07-15 11:39:23.876124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.407 [2024-07-15 11:39:23.876453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.407 [2024-07-15 11:39:23.876495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.407 [2024-07-15 11:39:23.876518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.407 [2024-07-15 11:39:23.877023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.407 [2024-07-15 11:39:23.877196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.407 [2024-07-15 11:39:23.877206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.407 [2024-07-15 11:39:23.877214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.407 [2024-07-15 11:39:23.879896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.407 [2024-07-15 11:39:23.889120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.407 [2024-07-15 11:39:23.889484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.407 [2024-07-15 11:39:23.889501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.407 [2024-07-15 11:39:23.889508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.407 [2024-07-15 11:39:23.889672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.407 [2024-07-15 11:39:23.889835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.407 [2024-07-15 11:39:23.889844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.407 [2024-07-15 11:39:23.889851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.407 [2024-07-15 11:39:23.892546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.407 [2024-07-15 11:39:23.902133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.407 [2024-07-15 11:39:23.902557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.407 [2024-07-15 11:39:23.902601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.407 [2024-07-15 11:39:23.902623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.407 [2024-07-15 11:39:23.903077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.407 [2024-07-15 11:39:23.903247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.407 [2024-07-15 11:39:23.903257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.407 [2024-07-15 11:39:23.903263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.407 [2024-07-15 11:39:23.905956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.407 [2024-07-15 11:39:23.915140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.407 [2024-07-15 11:39:23.915508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.407 [2024-07-15 11:39:23.915525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.407 [2024-07-15 11:39:23.915532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.407 [2024-07-15 11:39:23.915694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.407 [2024-07-15 11:39:23.915858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.407 [2024-07-15 11:39:23.915867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.407 [2024-07-15 11:39:23.915874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.407 [2024-07-15 11:39:23.918569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.407 [2024-07-15 11:39:23.928296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.407 [2024-07-15 11:39:23.928700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.407 [2024-07-15 11:39:23.928718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.407 [2024-07-15 11:39:23.928725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.407 [2024-07-15 11:39:23.928902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.407 [2024-07-15 11:39:23.929086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.407 [2024-07-15 11:39:23.929096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.407 [2024-07-15 11:39:23.929103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.407 [2024-07-15 11:39:23.931955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.407 [2024-07-15 11:39:23.941489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.407 [2024-07-15 11:39:23.941798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.407 [2024-07-15 11:39:23.941816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.407 [2024-07-15 11:39:23.941824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.407 [2024-07-15 11:39:23.942002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.407 [2024-07-15 11:39:23.942181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.407 [2024-07-15 11:39:23.942191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.407 [2024-07-15 11:39:23.942201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.407 [2024-07-15 11:39:23.945040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.407 [2024-07-15 11:39:23.954575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.407 [2024-07-15 11:39:23.954948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.407 [2024-07-15 11:39:23.954966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.407 [2024-07-15 11:39:23.954973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.407 [2024-07-15 11:39:23.955151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.407 [2024-07-15 11:39:23.955335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.408 [2024-07-15 11:39:23.955345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.408 [2024-07-15 11:39:23.955352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.408 [2024-07-15 11:39:23.958144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.408 [2024-07-15 11:39:23.967677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.408 [2024-07-15 11:39:23.968049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.408 [2024-07-15 11:39:23.968066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.408 [2024-07-15 11:39:23.968073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.408 [2024-07-15 11:39:23.968251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.408 [2024-07-15 11:39:23.968424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.408 [2024-07-15 11:39:23.968434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.408 [2024-07-15 11:39:23.968440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.408 [2024-07-15 11:39:23.971192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.408 [2024-07-15 11:39:23.980768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.408 [2024-07-15 11:39:23.981118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.408 [2024-07-15 11:39:23.981135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.408 [2024-07-15 11:39:23.981142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.408 [2024-07-15 11:39:23.981336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.408 [2024-07-15 11:39:23.981525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.408 [2024-07-15 11:39:23.981535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.408 [2024-07-15 11:39:23.981541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.408 [2024-07-15 11:39:23.984361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.408 [2024-07-15 11:39:23.993797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.408 [2024-07-15 11:39:23.994148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.408 [2024-07-15 11:39:23.994168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.408 [2024-07-15 11:39:23.994175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.408 [2024-07-15 11:39:23.994371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.408 [2024-07-15 11:39:23.994558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.408 [2024-07-15 11:39:23.994568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.408 [2024-07-15 11:39:23.994574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.669 [2024-07-15 11:39:23.997409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.669 [2024-07-15 11:39:24.006785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.669 [2024-07-15 11:39:24.007186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.669 [2024-07-15 11:39:24.007203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.669 [2024-07-15 11:39:24.007210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.669 [2024-07-15 11:39:24.007389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.669 [2024-07-15 11:39:24.007563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.669 [2024-07-15 11:39:24.007572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.669 [2024-07-15 11:39:24.007579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.669 [2024-07-15 11:39:24.010342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.669 [2024-07-15 11:39:24.019803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.669 [2024-07-15 11:39:24.020138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.669 [2024-07-15 11:39:24.020155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.669 [2024-07-15 11:39:24.020162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.669 [2024-07-15 11:39:24.020349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.669 [2024-07-15 11:39:24.020523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.669 [2024-07-15 11:39:24.020533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.669 [2024-07-15 11:39:24.020539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.669 [2024-07-15 11:39:24.023339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.669 [2024-07-15 11:39:24.032885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.669 [2024-07-15 11:39:24.033232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.669 [2024-07-15 11:39:24.033251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.669 [2024-07-15 11:39:24.033258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.669 [2024-07-15 11:39:24.033436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.669 [2024-07-15 11:39:24.033617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.669 [2024-07-15 11:39:24.033627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.669 [2024-07-15 11:39:24.033633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.669 [2024-07-15 11:39:24.036468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.669 [2024-07-15 11:39:24.045952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.669 [2024-07-15 11:39:24.046306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.669 [2024-07-15 11:39:24.046324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.669 [2024-07-15 11:39:24.046332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.669 [2024-07-15 11:39:24.046504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.669 [2024-07-15 11:39:24.046678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.669 [2024-07-15 11:39:24.046688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.669 [2024-07-15 11:39:24.046695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.669 [2024-07-15 11:39:24.049456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.669 [2024-07-15 11:39:24.058915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.669 [2024-07-15 11:39:24.059327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.669 [2024-07-15 11:39:24.059345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.669 [2024-07-15 11:39:24.059352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.669 [2024-07-15 11:39:24.059535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.669 [2024-07-15 11:39:24.059699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.669 [2024-07-15 11:39:24.059708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.669 [2024-07-15 11:39:24.059714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.669 [2024-07-15 11:39:24.062335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.669 [2024-07-15 11:39:24.071904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.669 [2024-07-15 11:39:24.072272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.669 [2024-07-15 11:39:24.072315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.669 [2024-07-15 11:39:24.072337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.669 [2024-07-15 11:39:24.072917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.669 [2024-07-15 11:39:24.073459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.669 [2024-07-15 11:39:24.073469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.669 [2024-07-15 11:39:24.073475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.669 [2024-07-15 11:39:24.076212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.669 [2024-07-15 11:39:24.084848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.669 [2024-07-15 11:39:24.085245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.669 [2024-07-15 11:39:24.085289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.669 [2024-07-15 11:39:24.085311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.669 [2024-07-15 11:39:24.085892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.669 [2024-07-15 11:39:24.086487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.669 [2024-07-15 11:39:24.086513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.669 [2024-07-15 11:39:24.086542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.669 [2024-07-15 11:39:24.089207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.669 [2024-07-15 11:39:24.097773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.669 [2024-07-15 11:39:24.098115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.669 [2024-07-15 11:39:24.098131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.669 [2024-07-15 11:39:24.098138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.669 [2024-07-15 11:39:24.098305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.669 [2024-07-15 11:39:24.098468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.669 [2024-07-15 11:39:24.098477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.669 [2024-07-15 11:39:24.098483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.669 [2024-07-15 11:39:24.101178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 759077 Killed "${NVMF_APP[@]}" "$@" 00:28:40.669 11:39:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:40.669 11:39:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:40.669 11:39:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:40.669 11:39:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:40.669 11:39:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.669 [2024-07-15 11:39:24.110948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.669 [2024-07-15 11:39:24.111242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.669 [2024-07-15 11:39:24.111259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.669 [2024-07-15 11:39:24.111267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.669 [2024-07-15 11:39:24.111444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.670 [2024-07-15 11:39:24.111624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.670 [2024-07-15 11:39:24.111634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.670 [2024-07-15 11:39:24.111643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:40.670 11:39:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=760455 00:28:40.670 11:39:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 760455 00:28:40.670 11:39:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:40.670 11:39:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 760455 ']' 00:28:40.670 11:39:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.670 11:39:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:40.670 11:39:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.670 [2024-07-15 11:39:24.114485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.670 11:39:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:40.670 11:39:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.670 [2024-07-15 11:39:24.124022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.670 [2024-07-15 11:39:24.124406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.670 [2024-07-15 11:39:24.124424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.670 [2024-07-15 11:39:24.124432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.670 [2024-07-15 11:39:24.124610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.670 [2024-07-15 11:39:24.124789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.670 [2024-07-15 11:39:24.124800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.670 [2024-07-15 11:39:24.124807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.670 [2024-07-15 11:39:24.127648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
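The tgt_init trace above shows bdevperf.sh restarting the target after the old nvmf_tgt was killed: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten polls the RPC socket until the new process answers. A rough bash sketch of that sequence, assuming the paths, namespace, rpc_addr=/var/tmp/spdk.sock and max_retries=100 shown in the trace, and using rpc_get_methods purely as a liveness probe:

# Start the target the same way the trace above does (command line taken from the log).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Roughly what waitforlisten does: poll the UNIX-domain RPC socket until the
# target responds, giving up after ~100 tries.
for _ in $(seq 1 100); do
    if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done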
00:28:40.670 [2024-07-15 11:39:24.137237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.670 [2024-07-15 11:39:24.137623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.670 [2024-07-15 11:39:24.137640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.670 [2024-07-15 11:39:24.137648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.670 [2024-07-15 11:39:24.137825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.670 [2024-07-15 11:39:24.138005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.670 [2024-07-15 11:39:24.138015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.670 [2024-07-15 11:39:24.138021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.670 [2024-07-15 11:39:24.140862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.670 [2024-07-15 11:39:24.150364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.670 [2024-07-15 11:39:24.150697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.670 [2024-07-15 11:39:24.150715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.670 [2024-07-15 11:39:24.150725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.670 [2024-07-15 11:39:24.150905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.670 [2024-07-15 11:39:24.151085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.670 [2024-07-15 11:39:24.151095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.670 [2024-07-15 11:39:24.151102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.670 [2024-07-15 11:39:24.153907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.670 [2024-07-15 11:39:24.162564] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:28:40.670 [2024-07-15 11:39:24.162604] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.670 [2024-07-15 11:39:24.163523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.670 [2024-07-15 11:39:24.163927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.670 [2024-07-15 11:39:24.163944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.670 [2024-07-15 11:39:24.163952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.670 [2024-07-15 11:39:24.164124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.670 [2024-07-15 11:39:24.164323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.670 [2024-07-15 11:39:24.164334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.670 [2024-07-15 11:39:24.164342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.670 [2024-07-15 11:39:24.167171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.670 [2024-07-15 11:39:24.176610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.670 [2024-07-15 11:39:24.177027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.670 [2024-07-15 11:39:24.177045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.670 [2024-07-15 11:39:24.177053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.670 [2024-07-15 11:39:24.177235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.670 [2024-07-15 11:39:24.177415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.670 [2024-07-15 11:39:24.177425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.670 [2024-07-15 11:39:24.177432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.670 [2024-07-15 11:39:24.180272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.670 [2024-07-15 11:39:24.189793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.670 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.670 [2024-07-15 11:39:24.190244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.670 [2024-07-15 11:39:24.190262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.670 [2024-07-15 11:39:24.190270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.670 [2024-07-15 11:39:24.190449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.670 [2024-07-15 11:39:24.190625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.670 [2024-07-15 11:39:24.190635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.670 [2024-07-15 11:39:24.190643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.670 [2024-07-15 11:39:24.193490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.670 [2024-07-15 11:39:24.202863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.670 [2024-07-15 11:39:24.203294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.670 [2024-07-15 11:39:24.203312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.670 [2024-07-15 11:39:24.203320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.670 [2024-07-15 11:39:24.203506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.670 [2024-07-15 11:39:24.203680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.670 [2024-07-15 11:39:24.203690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.670 [2024-07-15 11:39:24.203697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.670 [2024-07-15 11:39:24.206512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
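The "EAL: No free 2048 kB hugepages reported on node 1" notice above is DPDK reporting that NUMA node 1 has no 2 MiB hugepages reserved; SPDK setups normally reserve them through scripts/setup.sh before starting the target. A hypothetical reservation (not something this run performs, values are illustrative only):

# Reserve 2 MiB hugepages system-wide, or per NUMA node.
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
echo 512  > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages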
00:28:40.670 [2024-07-15 11:39:24.215990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.670 [2024-07-15 11:39:24.216445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.670 [2024-07-15 11:39:24.216462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.670 [2024-07-15 11:39:24.216470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.670 [2024-07-15 11:39:24.216643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.670 [2024-07-15 11:39:24.216818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.670 [2024-07-15 11:39:24.216828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.670 [2024-07-15 11:39:24.216834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.670 [2024-07-15 11:39:24.219649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.670 [2024-07-15 11:39:24.229062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.670 [2024-07-15 11:39:24.229515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.670 [2024-07-15 11:39:24.229532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.670 [2024-07-15 11:39:24.229539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.670 [2024-07-15 11:39:24.229711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.670 [2024-07-15 11:39:24.229886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.670 [2024-07-15 11:39:24.229895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.670 [2024-07-15 11:39:24.229909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.670 [2024-07-15 11:39:24.232687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.670 [2024-07-15 11:39:24.233980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:40.670 [2024-07-15 11:39:24.242103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.670 [2024-07-15 11:39:24.242538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.670 [2024-07-15 11:39:24.242557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.670 [2024-07-15 11:39:24.242565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.670 [2024-07-15 11:39:24.242739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.670 [2024-07-15 11:39:24.242913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.670 [2024-07-15 11:39:24.242923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.670 [2024-07-15 11:39:24.242930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.670 [2024-07-15 11:39:24.245743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.670 [2024-07-15 11:39:24.255171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.670 [2024-07-15 11:39:24.255631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.670 [2024-07-15 11:39:24.255649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.670 [2024-07-15 11:39:24.255656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.670 [2024-07-15 11:39:24.255835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.670 [2024-07-15 11:39:24.256015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.670 [2024-07-15 11:39:24.256025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.671 [2024-07-15 11:39:24.256032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.931 [2024-07-15 11:39:24.258868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.931 [2024-07-15 11:39:24.268160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.931 [2024-07-15 11:39:24.268604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-07-15 11:39:24.268622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.931 [2024-07-15 11:39:24.268630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.931 [2024-07-15 11:39:24.268802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.931 [2024-07-15 11:39:24.268976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.931 [2024-07-15 11:39:24.268986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.931 [2024-07-15 11:39:24.268993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.931 [2024-07-15 11:39:24.271810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.931 [2024-07-15 11:39:24.281280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.931 [2024-07-15 11:39:24.281702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-07-15 11:39:24.281722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.931 [2024-07-15 11:39:24.281730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.931 [2024-07-15 11:39:24.281905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.931 [2024-07-15 11:39:24.282082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.931 [2024-07-15 11:39:24.282092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.931 [2024-07-15 11:39:24.282100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.931 [2024-07-15 11:39:24.284933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.931 [2024-07-15 11:39:24.294454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.931 [2024-07-15 11:39:24.294909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-07-15 11:39:24.294928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.931 [2024-07-15 11:39:24.294936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.931 [2024-07-15 11:39:24.295115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.931 [2024-07-15 11:39:24.295299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.931 [2024-07-15 11:39:24.295310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.931 [2024-07-15 11:39:24.295317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.931 [2024-07-15 11:39:24.298198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.931 [2024-07-15 11:39:24.307581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.931 [2024-07-15 11:39:24.307958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-07-15 11:39:24.307977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.931 [2024-07-15 11:39:24.307985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.931 [2024-07-15 11:39:24.308165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.931 [2024-07-15 11:39:24.308352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.931 [2024-07-15 11:39:24.308363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.931 [2024-07-15 11:39:24.308370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.931 [2024-07-15 11:39:24.311193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.931 [2024-07-15 11:39:24.314410] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.931 [2024-07-15 11:39:24.314438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.931 [2024-07-15 11:39:24.314445] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.931 [2024-07-15 11:39:24.314452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.931 [2024-07-15 11:39:24.314460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
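The app_setup_trace notices above spell out how a snapshot of this nvmf target's tracepoints could be captured while it is still running. A minimal sketch of that workflow, using only the command and file path named in the notice (the redirect target nvmf_trace.txt and the /tmp destination are illustrative assumptions, not part of the captured log):

    # Parse the live shared-memory trace of the running nvmf app (shm id 0), per the notice above
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # Or keep a raw copy of the trace file for offline analysis/debug after the run ends
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0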
00:28:40.931 [2024-07-15 11:39:24.314513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.931 [2024-07-15 11:39:24.314619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.931 [2024-07-15 11:39:24.314620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.931 [2024-07-15 11:39:24.320695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.931 [2024-07-15 11:39:24.321159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-07-15 11:39:24.321179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.931 [2024-07-15 11:39:24.321188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.931 [2024-07-15 11:39:24.321374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.931 [2024-07-15 11:39:24.321556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.931 [2024-07-15 11:39:24.321566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.931 [2024-07-15 11:39:24.321572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.931 [2024-07-15 11:39:24.324406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.931 [2024-07-15 11:39:24.333777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.931 [2024-07-15 11:39:24.334235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-07-15 11:39:24.334255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.931 [2024-07-15 11:39:24.334263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.931 [2024-07-15 11:39:24.334442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.931 [2024-07-15 11:39:24.334622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.931 [2024-07-15 11:39:24.334632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.931 [2024-07-15 11:39:24.334639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.931 [2024-07-15 11:39:24.337478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.931 [2024-07-15 11:39:24.346841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.931 [2024-07-15 11:39:24.347309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-07-15 11:39:24.347330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.931 [2024-07-15 11:39:24.347338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.931 [2024-07-15 11:39:24.347518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.931 [2024-07-15 11:39:24.347698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.931 [2024-07-15 11:39:24.347709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.931 [2024-07-15 11:39:24.347716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.931 [2024-07-15 11:39:24.350562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.931 [2024-07-15 11:39:24.359934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.931 [2024-07-15 11:39:24.360409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-07-15 11:39:24.360429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.931 [2024-07-15 11:39:24.360438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.932 [2024-07-15 11:39:24.360613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.932 [2024-07-15 11:39:24.360789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.932 [2024-07-15 11:39:24.360799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.932 [2024-07-15 11:39:24.360806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.932 [2024-07-15 11:39:24.363646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.932 [2024-07-15 11:39:24.373014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.932 [2024-07-15 11:39:24.373457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-07-15 11:39:24.373477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.932 [2024-07-15 11:39:24.373486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.932 [2024-07-15 11:39:24.373665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.932 [2024-07-15 11:39:24.373845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.932 [2024-07-15 11:39:24.373855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.932 [2024-07-15 11:39:24.373863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.932 [2024-07-15 11:39:24.376701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.932 [2024-07-15 11:39:24.386062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.932 [2024-07-15 11:39:24.386516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-07-15 11:39:24.386535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.932 [2024-07-15 11:39:24.386542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.932 [2024-07-15 11:39:24.386721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.932 [2024-07-15 11:39:24.386902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.932 [2024-07-15 11:39:24.386911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.932 [2024-07-15 11:39:24.386919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.932 [2024-07-15 11:39:24.389755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.932 [2024-07-15 11:39:24.399122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.932 [2024-07-15 11:39:24.399567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-07-15 11:39:24.399585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.932 [2024-07-15 11:39:24.399592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.932 [2024-07-15 11:39:24.399776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.932 [2024-07-15 11:39:24.399955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.932 [2024-07-15 11:39:24.399964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.932 [2024-07-15 11:39:24.399971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.932 [2024-07-15 11:39:24.402807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.932 [2024-07-15 11:39:24.412378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.932 [2024-07-15 11:39:24.412840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-07-15 11:39:24.412857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.932 [2024-07-15 11:39:24.412865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.932 [2024-07-15 11:39:24.413044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.932 [2024-07-15 11:39:24.413228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.932 [2024-07-15 11:39:24.413239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.932 [2024-07-15 11:39:24.413246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.932 [2024-07-15 11:39:24.416074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.932 [2024-07-15 11:39:24.425438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.932 [2024-07-15 11:39:24.425863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-07-15 11:39:24.425881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.932 [2024-07-15 11:39:24.425888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.932 [2024-07-15 11:39:24.426068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.932 [2024-07-15 11:39:24.426253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.932 [2024-07-15 11:39:24.426263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.932 [2024-07-15 11:39:24.426270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.932 [2024-07-15 11:39:24.429094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.932 [2024-07-15 11:39:24.438637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.932 [2024-07-15 11:39:24.439077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-07-15 11:39:24.439095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.932 [2024-07-15 11:39:24.439102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.932 [2024-07-15 11:39:24.439285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.932 [2024-07-15 11:39:24.439463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.932 [2024-07-15 11:39:24.439474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.932 [2024-07-15 11:39:24.439484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.932 [2024-07-15 11:39:24.442318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.932 [2024-07-15 11:39:24.451840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.932 [2024-07-15 11:39:24.452278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-07-15 11:39:24.452294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.932 [2024-07-15 11:39:24.452301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.932 [2024-07-15 11:39:24.452473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.932 [2024-07-15 11:39:24.452646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.932 [2024-07-15 11:39:24.452656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.932 [2024-07-15 11:39:24.452662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.932 [2024-07-15 11:39:24.455491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.932 [2024-07-15 11:39:24.465003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.932 [2024-07-15 11:39:24.465446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-07-15 11:39:24.465464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.932 [2024-07-15 11:39:24.465471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.932 [2024-07-15 11:39:24.465650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.932 [2024-07-15 11:39:24.465829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.932 [2024-07-15 11:39:24.465838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.932 [2024-07-15 11:39:24.465845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.932 [2024-07-15 11:39:24.468680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.932 [2024-07-15 11:39:24.478200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.932 [2024-07-15 11:39:24.478647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-07-15 11:39:24.478664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.932 [2024-07-15 11:39:24.478672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.932 [2024-07-15 11:39:24.478851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.932 [2024-07-15 11:39:24.479029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.932 [2024-07-15 11:39:24.479039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.932 [2024-07-15 11:39:24.479046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.932 [2024-07-15 11:39:24.481877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.932 [2024-07-15 11:39:24.491397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.932 [2024-07-15 11:39:24.491760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-07-15 11:39:24.491776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.932 [2024-07-15 11:39:24.491783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.932 [2024-07-15 11:39:24.491961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.932 [2024-07-15 11:39:24.492141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.932 [2024-07-15 11:39:24.492151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.932 [2024-07-15 11:39:24.492157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.932 [2024-07-15 11:39:24.494988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.932 [2024-07-15 11:39:24.504565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.932 [2024-07-15 11:39:24.505027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-07-15 11:39:24.505045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.933 [2024-07-15 11:39:24.505052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.933 [2024-07-15 11:39:24.505236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.933 [2024-07-15 11:39:24.505416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.933 [2024-07-15 11:39:24.505425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.933 [2024-07-15 11:39:24.505432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.933 [2024-07-15 11:39:24.508261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.933 [2024-07-15 11:39:24.517616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.933 [2024-07-15 11:39:24.517982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-07-15 11:39:24.517999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:40.933 [2024-07-15 11:39:24.518006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:40.933 [2024-07-15 11:39:24.518183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:40.933 [2024-07-15 11:39:24.518365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.933 [2024-07-15 11:39:24.518376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.933 [2024-07-15 11:39:24.518383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.193 [2024-07-15 11:39:24.521218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.193 [2024-07-15 11:39:24.530754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.193 [2024-07-15 11:39:24.531206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.193 [2024-07-15 11:39:24.531227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.193 [2024-07-15 11:39:24.531235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.193 [2024-07-15 11:39:24.531414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.193 [2024-07-15 11:39:24.531597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.193 [2024-07-15 11:39:24.531607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.193 [2024-07-15 11:39:24.531613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.193 [2024-07-15 11:39:24.534449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.193 [2024-07-15 11:39:24.543817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.193 [2024-07-15 11:39:24.544262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.193 [2024-07-15 11:39:24.544280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.193 [2024-07-15 11:39:24.544287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.193 [2024-07-15 11:39:24.544465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.193 [2024-07-15 11:39:24.544643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.193 [2024-07-15 11:39:24.544653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.193 [2024-07-15 11:39:24.544659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.193 [2024-07-15 11:39:24.547495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.193 [2024-07-15 11:39:24.556856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.193 [2024-07-15 11:39:24.557302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.193 [2024-07-15 11:39:24.557320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.193 [2024-07-15 11:39:24.557327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.193 [2024-07-15 11:39:24.557505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.193 [2024-07-15 11:39:24.557685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.193 [2024-07-15 11:39:24.557694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.193 [2024-07-15 11:39:24.557701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.193 [2024-07-15 11:39:24.560532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.193 [2024-07-15 11:39:24.570058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.193 [2024-07-15 11:39:24.570516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.193 [2024-07-15 11:39:24.570533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.193 [2024-07-15 11:39:24.570541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.193 [2024-07-15 11:39:24.570717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.193 [2024-07-15 11:39:24.570896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.193 [2024-07-15 11:39:24.570905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.193 [2024-07-15 11:39:24.570912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.193 [2024-07-15 11:39:24.573751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.193 [2024-07-15 11:39:24.583103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.194 [2024-07-15 11:39:24.583525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.194 [2024-07-15 11:39:24.583542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.194 [2024-07-15 11:39:24.583549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.194 [2024-07-15 11:39:24.583727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.194 [2024-07-15 11:39:24.583905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.194 [2024-07-15 11:39:24.583915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.194 [2024-07-15 11:39:24.583921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.194 [2024-07-15 11:39:24.586753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.194 [2024-07-15 11:39:24.596277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.194 [2024-07-15 11:39:24.596657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.194 [2024-07-15 11:39:24.596674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.194 [2024-07-15 11:39:24.596681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.194 [2024-07-15 11:39:24.596858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.194 [2024-07-15 11:39:24.597037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.194 [2024-07-15 11:39:24.597047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.194 [2024-07-15 11:39:24.597053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.194 [2024-07-15 11:39:24.599887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.194 [2024-07-15 11:39:24.609431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.194 [2024-07-15 11:39:24.609831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.194 [2024-07-15 11:39:24.609849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.194 [2024-07-15 11:39:24.609857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.194 [2024-07-15 11:39:24.610034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.194 [2024-07-15 11:39:24.610215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.194 [2024-07-15 11:39:24.610230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.194 [2024-07-15 11:39:24.610238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.194 [2024-07-15 11:39:24.613068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.194 [2024-07-15 11:39:24.622601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.194 [2024-07-15 11:39:24.622960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.194 [2024-07-15 11:39:24.622981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.194 [2024-07-15 11:39:24.622988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.194 [2024-07-15 11:39:24.623167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.194 [2024-07-15 11:39:24.623348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.194 [2024-07-15 11:39:24.623358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.194 [2024-07-15 11:39:24.623365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.194 [2024-07-15 11:39:24.626197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.194 [2024-07-15 11:39:24.635732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.194 [2024-07-15 11:39:24.636179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.194 [2024-07-15 11:39:24.636195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.194 [2024-07-15 11:39:24.636203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.194 [2024-07-15 11:39:24.636385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.194 [2024-07-15 11:39:24.636563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.194 [2024-07-15 11:39:24.636572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.194 [2024-07-15 11:39:24.636579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.194 [2024-07-15 11:39:24.639408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.194 [2024-07-15 11:39:24.648769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.194 [2024-07-15 11:39:24.649188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.194 [2024-07-15 11:39:24.649205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.194 [2024-07-15 11:39:24.649213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.194 [2024-07-15 11:39:24.649401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.194 [2024-07-15 11:39:24.649579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.194 [2024-07-15 11:39:24.649589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.194 [2024-07-15 11:39:24.649596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.194 [2024-07-15 11:39:24.652431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.194 [2024-07-15 11:39:24.661954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.194 [2024-07-15 11:39:24.662383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.194 [2024-07-15 11:39:24.662401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.194 [2024-07-15 11:39:24.662409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.194 [2024-07-15 11:39:24.662587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.194 [2024-07-15 11:39:24.662768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.194 [2024-07-15 11:39:24.662779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.194 [2024-07-15 11:39:24.662785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.194 [2024-07-15 11:39:24.665618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.194 [2024-07-15 11:39:24.675139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.194 [2024-07-15 11:39:24.675565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.194 [2024-07-15 11:39:24.675583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.194 [2024-07-15 11:39:24.675590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.194 [2024-07-15 11:39:24.675768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.194 [2024-07-15 11:39:24.675947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.194 [2024-07-15 11:39:24.675957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.194 [2024-07-15 11:39:24.675964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.194 [2024-07-15 11:39:24.678797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.194 [2024-07-15 11:39:24.688324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.194 [2024-07-15 11:39:24.688778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.194 [2024-07-15 11:39:24.688795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.194 [2024-07-15 11:39:24.688802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.194 [2024-07-15 11:39:24.688980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.194 [2024-07-15 11:39:24.689159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.194 [2024-07-15 11:39:24.689169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.194 [2024-07-15 11:39:24.689175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.194 [2024-07-15 11:39:24.692008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.194 [2024-07-15 11:39:24.701375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.194 [2024-07-15 11:39:24.701750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.194 [2024-07-15 11:39:24.701767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.194 [2024-07-15 11:39:24.701775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.194 [2024-07-15 11:39:24.701953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.194 [2024-07-15 11:39:24.702130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.194 [2024-07-15 11:39:24.702140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.194 [2024-07-15 11:39:24.702146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.194 [2024-07-15 11:39:24.704979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.194 [2024-07-15 11:39:24.714545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.194 [2024-07-15 11:39:24.714995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.194 [2024-07-15 11:39:24.715013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.194 [2024-07-15 11:39:24.715021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.194 [2024-07-15 11:39:24.715199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.194 [2024-07-15 11:39:24.715383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.194 [2024-07-15 11:39:24.715395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.195 [2024-07-15 11:39:24.715402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.195 [2024-07-15 11:39:24.718233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.195 [2024-07-15 11:39:24.727598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.195 [2024-07-15 11:39:24.728025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.195 [2024-07-15 11:39:24.728043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.195 [2024-07-15 11:39:24.728050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.195 [2024-07-15 11:39:24.728231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.195 [2024-07-15 11:39:24.728410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.195 [2024-07-15 11:39:24.728420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.195 [2024-07-15 11:39:24.728426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.195 [2024-07-15 11:39:24.731258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.195 [2024-07-15 11:39:24.740786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.195 [2024-07-15 11:39:24.741206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.195 [2024-07-15 11:39:24.741223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.195 [2024-07-15 11:39:24.741235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.195 [2024-07-15 11:39:24.741412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.195 [2024-07-15 11:39:24.741591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.195 [2024-07-15 11:39:24.741600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.195 [2024-07-15 11:39:24.741607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.195 [2024-07-15 11:39:24.744442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.195 [2024-07-15 11:39:24.753970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.195 [2024-07-15 11:39:24.754392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.195 [2024-07-15 11:39:24.754409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.195 [2024-07-15 11:39:24.754420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.195 [2024-07-15 11:39:24.754598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.195 [2024-07-15 11:39:24.754778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.195 [2024-07-15 11:39:24.754788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.195 [2024-07-15 11:39:24.754794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.195 [2024-07-15 11:39:24.757626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.195 [2024-07-15 11:39:24.767152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.195 [2024-07-15 11:39:24.767522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.195 [2024-07-15 11:39:24.767540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.195 [2024-07-15 11:39:24.767547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.195 [2024-07-15 11:39:24.767725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.195 [2024-07-15 11:39:24.767904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.195 [2024-07-15 11:39:24.767914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.195 [2024-07-15 11:39:24.767920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.195 [2024-07-15 11:39:24.770785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.195 [2024-07-15 11:39:24.780320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.195 [2024-07-15 11:39:24.780747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.195 [2024-07-15 11:39:24.780764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.195 [2024-07-15 11:39:24.780771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.195 [2024-07-15 11:39:24.780948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.195 [2024-07-15 11:39:24.781126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.195 [2024-07-15 11:39:24.781136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.195 [2024-07-15 11:39:24.781143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.456 [2024-07-15 11:39:24.783976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.456 [2024-07-15 11:39:24.793518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.456 [2024-07-15 11:39:24.793961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.456 [2024-07-15 11:39:24.793979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.456 [2024-07-15 11:39:24.793986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.456 [2024-07-15 11:39:24.794165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.456 [2024-07-15 11:39:24.794350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.456 [2024-07-15 11:39:24.794364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.456 [2024-07-15 11:39:24.794370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.456 [2024-07-15 11:39:24.797203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.456 [2024-07-15 11:39:24.806559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.456 [2024-07-15 11:39:24.807026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.456 [2024-07-15 11:39:24.807044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.456 [2024-07-15 11:39:24.807052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.456 [2024-07-15 11:39:24.807234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.456 [2024-07-15 11:39:24.807414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.456 [2024-07-15 11:39:24.807424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.456 [2024-07-15 11:39:24.807431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.456 [2024-07-15 11:39:24.810262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.456 [2024-07-15 11:39:24.819617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.456 [2024-07-15 11:39:24.820064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.456 [2024-07-15 11:39:24.820082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.456 [2024-07-15 11:39:24.820089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.456 [2024-07-15 11:39:24.820271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.456 [2024-07-15 11:39:24.820449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.456 [2024-07-15 11:39:24.820459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.456 [2024-07-15 11:39:24.820466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.456 [2024-07-15 11:39:24.823300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.456 [2024-07-15 11:39:24.832667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.456 [2024-07-15 11:39:24.833100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.456 [2024-07-15 11:39:24.833116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.456 [2024-07-15 11:39:24.833124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.456 [2024-07-15 11:39:24.833304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.456 [2024-07-15 11:39:24.833482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.456 [2024-07-15 11:39:24.833492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.456 [2024-07-15 11:39:24.833499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.456 [2024-07-15 11:39:24.836328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.456 [2024-07-15 11:39:24.845854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.456 [2024-07-15 11:39:24.846295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.456 [2024-07-15 11:39:24.846312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.456 [2024-07-15 11:39:24.846319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.456 [2024-07-15 11:39:24.846505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.456 [2024-07-15 11:39:24.846679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.456 [2024-07-15 11:39:24.846689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.456 [2024-07-15 11:39:24.846695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.456 [2024-07-15 11:39:24.849534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.456 [2024-07-15 11:39:24.858899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.456 [2024-07-15 11:39:24.859329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.456 [2024-07-15 11:39:24.859346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.456 [2024-07-15 11:39:24.859354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.456 [2024-07-15 11:39:24.859531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.456 [2024-07-15 11:39:24.859708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.456 [2024-07-15 11:39:24.859718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.456 [2024-07-15 11:39:24.859725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.456 [2024-07-15 11:39:24.862562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.456 [2024-07-15 11:39:24.872088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.456 [2024-07-15 11:39:24.872539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.456 [2024-07-15 11:39:24.872557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.456 [2024-07-15 11:39:24.872564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.456 [2024-07-15 11:39:24.872742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.456 [2024-07-15 11:39:24.872921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.456 [2024-07-15 11:39:24.872931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.456 [2024-07-15 11:39:24.872938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.456 [2024-07-15 11:39:24.875769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.456 [2024-07-15 11:39:24.885133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.456 [2024-07-15 11:39:24.885575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.456 [2024-07-15 11:39:24.885593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.457 [2024-07-15 11:39:24.885600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.457 [2024-07-15 11:39:24.885777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.457 [2024-07-15 11:39:24.885951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.457 [2024-07-15 11:39:24.885961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.457 [2024-07-15 11:39:24.885967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.457 [2024-07-15 11:39:24.888810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.457 [2024-07-15 11:39:24.898176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.457 [2024-07-15 11:39:24.898620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.457 [2024-07-15 11:39:24.898638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.457 [2024-07-15 11:39:24.898646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.457 [2024-07-15 11:39:24.898824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.457 [2024-07-15 11:39:24.899006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.457 [2024-07-15 11:39:24.899016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.457 [2024-07-15 11:39:24.899022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.457 [2024-07-15 11:39:24.901858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.457 [2024-07-15 11:39:24.911223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.457 [2024-07-15 11:39:24.911655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.457 [2024-07-15 11:39:24.911672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.457 [2024-07-15 11:39:24.911680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.457 [2024-07-15 11:39:24.911858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.457 [2024-07-15 11:39:24.912037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.457 [2024-07-15 11:39:24.912048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.457 [2024-07-15 11:39:24.912055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.457 [2024-07-15 11:39:24.914894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.457 [2024-07-15 11:39:24.924285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.457 [2024-07-15 11:39:24.924662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.457 [2024-07-15 11:39:24.924680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.457 [2024-07-15 11:39:24.924690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.457 [2024-07-15 11:39:24.924869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.457 [2024-07-15 11:39:24.925050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.457 [2024-07-15 11:39:24.925061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.457 [2024-07-15 11:39:24.925074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.457 [2024-07-15 11:39:24.927912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.457 [2024-07-15 11:39:24.937449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.457 [2024-07-15 11:39:24.937856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.457 [2024-07-15 11:39:24.937873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.457 [2024-07-15 11:39:24.937881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.457 [2024-07-15 11:39:24.938058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.457 [2024-07-15 11:39:24.938243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.457 [2024-07-15 11:39:24.938254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.457 [2024-07-15 11:39:24.938260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.457 [2024-07-15 11:39:24.941092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.457 [2024-07-15 11:39:24.950640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.457 [2024-07-15 11:39:24.951064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.457 [2024-07-15 11:39:24.951082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.457 [2024-07-15 11:39:24.951089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.457 [2024-07-15 11:39:24.951272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.457 [2024-07-15 11:39:24.951451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.457 [2024-07-15 11:39:24.951461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.457 [2024-07-15 11:39:24.951468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.457 [2024-07-15 11:39:24.954300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.457 [2024-07-15 11:39:24.963829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.457 [2024-07-15 11:39:24.964230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.457 [2024-07-15 11:39:24.964248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.457 [2024-07-15 11:39:24.964255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.457 [2024-07-15 11:39:24.964433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.457 [2024-07-15 11:39:24.964612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.457 [2024-07-15 11:39:24.964622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.457 [2024-07-15 11:39:24.964629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.457 [2024-07-15 11:39:24.967465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.457 11:39:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:41.457 11:39:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:41.457 11:39:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:41.457 11:39:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:41.457 11:39:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.457 [2024-07-15 11:39:24.976985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.457 [2024-07-15 11:39:24.977417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.457 [2024-07-15 11:39:24.977435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.457 [2024-07-15 11:39:24.977443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.457 [2024-07-15 11:39:24.977620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.457 [2024-07-15 11:39:24.977799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.457 [2024-07-15 11:39:24.977809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.457 [2024-07-15 11:39:24.977816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.457 [2024-07-15 11:39:24.980653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.457 [2024-07-15 11:39:24.990190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.457 [2024-07-15 11:39:24.990617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.457 [2024-07-15 11:39:24.990635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.457 [2024-07-15 11:39:24.990642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.457 [2024-07-15 11:39:24.990820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.457 [2024-07-15 11:39:24.991000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.458 [2024-07-15 11:39:24.991010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.458 [2024-07-15 11:39:24.991017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.458 [2024-07-15 11:39:24.993854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.458 [2024-07-15 11:39:25.003396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.458 [2024-07-15 11:39:25.003739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.458 [2024-07-15 11:39:25.003756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.458 [2024-07-15 11:39:25.003764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.458 [2024-07-15 11:39:25.003940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.458 [2024-07-15 11:39:25.004120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.458 [2024-07-15 11:39:25.004130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.458 [2024-07-15 11:39:25.004136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.458 [2024-07-15 11:39:25.006975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.458 11:39:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.458 11:39:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:41.458 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.458 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.458 [2024-07-15 11:39:25.015408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.458 [2024-07-15 11:39:25.016508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.458 [2024-07-15 11:39:25.016934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.458 [2024-07-15 11:39:25.016951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.458 [2024-07-15 11:39:25.016959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.458 [2024-07-15 11:39:25.017136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.458 [2024-07-15 11:39:25.017320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.458 [2024-07-15 11:39:25.017330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.458 [2024-07-15 11:39:25.017337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.458 [2024-07-15 11:39:25.020170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.458 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.458 11:39:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:41.458 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.458 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.458 [2024-07-15 11:39:25.029705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.458 [2024-07-15 11:39:25.030084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.458 [2024-07-15 11:39:25.030100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.458 [2024-07-15 11:39:25.030108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.458 [2024-07-15 11:39:25.030291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.458 [2024-07-15 11:39:25.030470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.458 [2024-07-15 11:39:25.030481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.458 [2024-07-15 11:39:25.030487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.458 [2024-07-15 11:39:25.033319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.458 [2024-07-15 11:39:25.042855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.458 [2024-07-15 11:39:25.043264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.458 [2024-07-15 11:39:25.043282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.458 [2024-07-15 11:39:25.043290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.458 [2024-07-15 11:39:25.043468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.458 [2024-07-15 11:39:25.043647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.458 [2024-07-15 11:39:25.043658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.458 [2024-07-15 11:39:25.043668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.718 [2024-07-15 11:39:25.046506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.718 Malloc0 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.718 [2024-07-15 11:39:25.056058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.718 [2024-07-15 11:39:25.056496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.718 [2024-07-15 11:39:25.056514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.718 [2024-07-15 11:39:25.056522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.718 [2024-07-15 11:39:25.056701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.718 [2024-07-15 11:39:25.056881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.718 [2024-07-15 11:39:25.056890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.718 [2024-07-15 11:39:25.056897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.718 [2024-07-15 11:39:25.059733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.718 [2024-07-15 11:39:25.069263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.718 [2024-07-15 11:39:25.069719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.718 [2024-07-15 11:39:25.069736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc4980 with addr=10.0.0.2, port=4420 00:28:41.718 [2024-07-15 11:39:25.069744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc4980 is same with the state(5) to be set 00:28:41.718 [2024-07-15 11:39:25.069921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4980 (9): Bad file descriptor 00:28:41.718 [2024-07-15 11:39:25.070100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.718 [2024-07-15 11:39:25.070110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.718 [2024-07-15 11:39:25.070116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.718 [2024-07-15 11:39:25.072955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
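The xtrace above shows the bdevperf harness assembling the target over JSON-RPC: a TCP transport, a RAM-backed Malloc0 bdev, and the nqn.2016-06.io.spdk:cnode1 subsystem that receives the namespace and, in the next step of the trace, a TCP listener on 10.0.0.2:4420. A minimal hand-run sketch of the same bring-up, assuming SPDK's stock scripts/rpc.py client and an nvmf_tgt process already listening on its default RPC socket, would be:

  RPC=./scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, same flags the test passes
  $RPC bdev_malloc_create 64 512 -b Malloc0                           # 64 MB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001          # -a: allow any host, -s: serial number
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0                           # expose the bdev as a namespace
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420  # listen on the target-side address

The rpc_cmd helper in the trace is a thin wrapper around the same client, so the sketch mirrors the script rather than adding anything new.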
00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.718 [2024-07-15 11:39:25.077992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.718 [2024-07-15 11:39:25.082318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.718 11:39:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 759380 00:28:41.718 [2024-07-15 11:39:25.150979] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:49.906 00:28:49.906 Latency(us) 00:28:49.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.906 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:49.906 Verification LBA range: start 0x0 length 0x4000 00:28:49.906 Nvme1n1 : 15.00 8023.77 31.34 12825.35 0.00 6119.66 648.24 14702.86 00:28:49.906 =================================================================================================================== 00:28:49.906 Total : 8023.77 31.34 12825.35 0.00 6119.66 648.24 14702.86 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:50.164 rmmod nvme_tcp 00:28:50.164 rmmod nvme_fabrics 00:28:50.164 rmmod nvme_keyring 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 760455 ']' 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 760455 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 760455 ']' 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 760455 00:28:50.164 11:39:33 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.164 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 760455 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 760455' 00:28:50.423 killing process with pid 760455 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 760455 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 760455 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.423 11:39:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.958 11:39:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:52.958 00:28:52.958 real 0m26.322s 00:28:52.958 user 1m2.528s 00:28:52.958 sys 0m6.485s 00:28:52.958 11:39:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:52.958 11:39:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.958 ************************************ 00:28:52.958 END TEST nvmf_bdevperf 00:28:52.958 ************************************ 00:28:52.958 11:39:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:52.958 11:39:36 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:52.958 11:39:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:52.958 11:39:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:52.958 11:39:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:52.958 ************************************ 00:28:52.958 START TEST nvmf_target_disconnect 00:28:52.958 ************************************ 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:52.958 * Looking for test storage... 
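Once bdevperf reports its latency summary, the harness tears the setup down in reverse: the subsystem is deleted over RPC, the host-side NVMe/TCP kernel modules are unloaded (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring come from modprobe -v -r), the target process is killed, and the test address is flushed from the initiator port. Condensed into a hand-run sketch, using the PID and interface name seen in this run:

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp        # in this trace it drags out nvme_tcp, nvme_fabrics and nvme_keyring
  kill 760455                    # 760455 is this run's nvmf_tgt PID
  ip -4 addr flush cvl_0_1       # drop the test address from the initiator port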
00:28:52.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:52.958 11:39:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:28:52.959 11:39:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
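gather_supported_nvmf_pci_devs builds its candidate list purely from PCI vendor/device IDs: the e810 array holds Intel 0x1592/0x159b, x722 holds 0x37d2, and the mlx array collects the Mellanox parts; the loop that follows matches those IDs against the host and resolves each hit to a kernel net device. The same check can be made by hand; a small sketch, assuming the PCI address 0000:86:00.0 reported further down in this trace:

  lspci -D -d 8086:159b                          # list Intel E810 ports (vendor 0x8086, device 0x159b)
  ls /sys/bus/pci/devices/0000:86:00.0/net/      # resolve a port to its net device (cvl_0_0 here)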
00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:58.235 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:58.235 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.235 11:39:41 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:58.235 Found net devices under 0000:86:00.0: cvl_0_0 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:58.235 Found net devices under 0000:86:00.1: cvl_0_1 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:58.235 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:58.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:28:58.494 00:28:58.494 --- 10.0.0.2 ping statistics --- 00:28:58.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.494 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:58.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:28:58.494 00:28:58.494 --- 10.0.0.1 ping statistics --- 00:28:58.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.494 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:58.494 ************************************ 00:28:58.494 START TEST nvmf_target_disconnect_tc1 00:28:58.494 ************************************ 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:28:58.494 
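The nvmf_tcp_init steps traced above leave a two-port loopback topology in place: the e810 port under 0000:86:00.0 (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). A condensed sketch of the same setup, using only commands that appear in the trace (anything beyond them would be an assumption):

# flush any stale addresses, then split the two ports across namespaces
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator side (root namespace) and target side (namespace) addressing
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# allow NVMe/TCP traffic (port 4420) in, verify reachability both ways, load the host driver
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp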
11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:58.494 11:39:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:58.494 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.752 [2024-07-15 11:39:42.087826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.752 [2024-07-15 11:39:42.087871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a31e60 with addr=10.0.0.2, port=4420 00:28:58.752 [2024-07-15 11:39:42.087894] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:58.752 [2024-07-15 11:39:42.087904] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:58.752 [2024-07-15 11:39:42.087910] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:58.752 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:58.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:58.752 Initializing NVMe Controllers 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:58.752 00:28:58.752 real 0m0.114s 00:28:58.752 user 0m0.055s 00:28:58.752 sys 
0m0.059s 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:58.752 ************************************ 00:28:58.752 END TEST nvmf_target_disconnect_tc1 00:28:58.752 ************************************ 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:58.752 ************************************ 00:28:58.752 START TEST nvmf_target_disconnect_tc2 00:28:58.752 ************************************ 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=765452 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 765452 00:28:58.752 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:58.753 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 765452 ']' 00:28:58.753 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.753 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:58.753 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
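disconnect_init then starts the NVMe-oF target inside that namespace with core mask 0xF0 and waits in waitforlisten until the application's RPC socket at /var/tmp/spdk.sock answers. A minimal stand-in for that start-and-wait step (run from the SPDK repository root; the polling loop below is only an illustration, not the harness's actual waitforlisten implementation):

# start the target in the namespace that owns the 10.0.0.2 interface
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# poll the RPC socket until the app is ready to accept configuration RPCs
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done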
00:28:58.753 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:58.753 11:39:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.753 [2024-07-15 11:39:42.225860] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:28:58.753 [2024-07-15 11:39:42.225906] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.753 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.753 [2024-07-15 11:39:42.300398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.010 [2024-07-15 11:39:42.381743] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.010 [2024-07-15 11:39:42.381777] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.010 [2024-07-15 11:39:42.381785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.010 [2024-07-15 11:39:42.381791] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.010 [2024-07-15 11:39:42.381796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.010 [2024-07-15 11:39:42.381903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:28:59.010 [2024-07-15 11:39:42.381931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:28:59.010 [2024-07-15 11:39:42.382221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:28:59.010 [2024-07-15 11:39:42.382222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.577 Malloc0 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:59.577 11:39:43 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.577 [2024-07-15 11:39:43.114219] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.577 [2024-07-15 11:39:43.146443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=765703 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:59.577 11:39:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:59.833 EAL: No free 2048 kB 
hugepages reported on node 1 00:29:01.745 11:39:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 765452 00:29:01.745 11:39:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 [2024-07-15 11:39:45.173985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 
starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 [2024-07-15 11:39:45.174191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O 
failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Read completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.745 Write completed with error (sct=0, sc=8) 00:29:01.745 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 [2024-07-15 11:39:45.174396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 
00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Write completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 Read completed with error (sct=0, sc=8) 00:29:01.746 starting I/O failed 00:29:01.746 [2024-07-15 11:39:45.174591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.746 [2024-07-15 11:39:45.174877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.174895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.175010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.175022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.175198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.175239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.175403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.175436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.175635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.175665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.175863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.175893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 
00:29:01.746 [2024-07-15 11:39:45.176075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.176106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.176318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.176329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.176503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.176533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.176672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.176702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.176906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.176937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.177133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.177144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.177260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.177291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.177532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.177563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.177847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.177877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.178079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.178109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 
00:29:01.746 [2024-07-15 11:39:45.178302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.178334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.178486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.178517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.178748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.178777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.178925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.178955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.179090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.179120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.179318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.179329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.746 [2024-07-15 11:39:45.179432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.746 [2024-07-15 11:39:45.179465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.746 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.179743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.179780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.179864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.179876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.180049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.180079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 
00:29:01.747 [2024-07-15 11:39:45.180284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.180316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.180532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.180562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.180702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.180732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.180933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.180964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.181218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.181259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.181458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.181489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.181619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.181649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.181854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.181884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.182070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.182100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.182306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.182338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 
00:29:01.747 [2024-07-15 11:39:45.182458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.182489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.182632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.182662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.182863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.182894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.183110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.183147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.183338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.183350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.183436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.183447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.183589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.183601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.183678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.183689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.183853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.183864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.184031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.184061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 
00:29:01.747 [2024-07-15 11:39:45.184263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.184294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.184434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.184471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.184753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.184783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.184992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.185023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.185135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.185149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.185326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.185361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.185549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.185579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.185736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.185766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.186030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.186061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.186264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.186296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 
00:29:01.747 [2024-07-15 11:39:45.186429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.186459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.186683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.186714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.186905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.186935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.187120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.187149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.187298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.187330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.187449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.747 [2024-07-15 11:39:45.187479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.747 qpair failed and we were unable to recover it. 00:29:01.747 [2024-07-15 11:39:45.187730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.187761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.187958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.187988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.188207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.188246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.188392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.188422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 
00:29:01.748 [2024-07-15 11:39:45.188692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.188723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.188858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.188889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.189035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.189066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.189252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.189265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.189417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.189429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.189671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.189684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.189846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.189859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.189952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.189965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.190114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.190127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.190234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.190247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 
00:29:01.748 [2024-07-15 11:39:45.190357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.190370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.190581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.190593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.190757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.190769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.190928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.190943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.191040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.191053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.191137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.191150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.191238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.191251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.191427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.191457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.191736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.191770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.192030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.192061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 
00:29:01.748 [2024-07-15 11:39:45.192206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.192256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.192393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.192430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.192647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.192677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.192879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.192910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.193042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.193073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.193288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.193320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.193524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.193555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.193745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.748 [2024-07-15 11:39:45.193776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.748 qpair failed and we were unable to recover it. 00:29:01.748 [2024-07-15 11:39:45.193912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.193941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.194130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.194161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 
00:29:01.749 [2024-07-15 11:39:45.194350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.194382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.194583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.194615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.194871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.194901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.195034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.195064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.195269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.195300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.195594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.195624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.195918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.195948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.196143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.196173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.196431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.196464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.196675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.196704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 
00:29:01.749 [2024-07-15 11:39:45.196911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.196942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.197090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.197121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.197318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.197349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.197546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.197580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.197860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.197892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.198026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.198057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.198252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.198284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.198510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.198541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.198731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.198762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.198980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.199011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 
00:29:01.749 [2024-07-15 11:39:45.199218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.199257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.199509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.199541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.199680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.199711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.199901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.199937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.200216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.200254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.200465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.200496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.200749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.200780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.200966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.200996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.201191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.201222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.201393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.201424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 
00:29:01.749 [2024-07-15 11:39:45.201699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.201731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.202008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.202038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.749 qpair failed and we were unable to recover it. 00:29:01.749 [2024-07-15 11:39:45.202220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.749 [2024-07-15 11:39:45.202263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.202408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.202438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.202569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.202599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.202788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.202819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.203026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.203056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.203275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.203307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.203493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.203523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.203723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.203753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 
00:29:01.750 [2024-07-15 11:39:45.203897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.203928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.204052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.204083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.204297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.204328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.204602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.204632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.204775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.204806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.204938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.204968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.205186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.205217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.205431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.205461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.205715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.205746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.205877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.205908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 
00:29:01.750 [2024-07-15 11:39:45.206117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.206148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.206297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.206328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.206530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.206561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.206693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.206724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.206973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.207003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.207191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.207221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.207435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.207466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.207657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.207688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.207815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.207845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.208044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.208074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 
00:29:01.750 [2024-07-15 11:39:45.208375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.208408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.750 [2024-07-15 11:39:45.208538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.750 [2024-07-15 11:39:45.208568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.750 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.208795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.208826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.209030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.209066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.209264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.209296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.209614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.209645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.209847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.209878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.210089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.210120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.210381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.210413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.210602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.210633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 
00:29:01.751 [2024-07-15 11:39:45.210781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.210812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.211068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.211099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.211304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.211335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.211535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.211566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.211697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.211727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.211937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.211968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.212104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.212135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.212322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.212355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.212552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.212583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.212706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.212737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 
00:29:01.751 [2024-07-15 11:39:45.212961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.212992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.213126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.213157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.213359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.213391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.213530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.213561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.213741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.213771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.214038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.214069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.214338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.214369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.214638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.214669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.214871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.214902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.215153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.215184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 
00:29:01.751 [2024-07-15 11:39:45.215444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.215476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.215726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.215756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.215955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.215987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.216188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.216219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.216482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.216512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.216650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.216680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.216882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.216914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.751 [2024-07-15 11:39:45.217114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.751 [2024-07-15 11:39:45.217146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.751 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.217348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.217381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.217590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.217620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 
00:29:01.752 [2024-07-15 11:39:45.217873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.217903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.218156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.218187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.218448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.218479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.218600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.218636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.218888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.218919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.219191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.219222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.219427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.219457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.219659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.219690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.219897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.219928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.220112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.220142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 
00:29:01.752 [2024-07-15 11:39:45.220356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.220387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.220638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.220667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.220886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.220916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.221141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.221171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.221384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.221416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.221552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.221583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.221725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.221756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.221898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.221929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.222144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.222174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.222312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.222344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 
00:29:01.752 [2024-07-15 11:39:45.222539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.222570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.222791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.222821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.223020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.223051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.223243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.223275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.223456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.223486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.223634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.223664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.223921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.223951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.224093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.224125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.224378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.224409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.224552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.224583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 
00:29:01.752 [2024-07-15 11:39:45.224721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.224752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.224894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.224924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.225110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.225141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.225277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.225308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.225449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.225480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.752 [2024-07-15 11:39:45.225613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.752 [2024-07-15 11:39:45.225643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.752 qpair failed and we were unable to recover it. 00:29:01.753 [2024-07-15 11:39:45.225761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.753 [2024-07-15 11:39:45.225792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.753 qpair failed and we were unable to recover it. 00:29:01.753 [2024-07-15 11:39:45.225911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.753 [2024-07-15 11:39:45.225942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.753 qpair failed and we were unable to recover it. 00:29:01.753 [2024-07-15 11:39:45.226083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.753 [2024-07-15 11:39:45.226115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.753 qpair failed and we were unable to recover it. 00:29:01.753 [2024-07-15 11:39:45.226302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.753 [2024-07-15 11:39:45.226334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.753 qpair failed and we were unable to recover it. 
00:29:01.753 [2024-07-15 11:39:45.226519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.753 [2024-07-15 11:39:45.226550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420
00:29:01.753 qpair failed and we were unable to recover it.
[... the same three-line record repeats roughly 200 more times with timestamps from 11:39:45.226 through 11:39:45.274, always for tqpair=0x7f6250000b90, addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:01.758 [2024-07-15 11:39:45.274827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.758 [2024-07-15 11:39:45.274858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420
00:29:01.758 qpair failed and we were unable to recover it.
00:29:01.758 [2024-07-15 11:39:45.275046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.275076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.275292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.275325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.275476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.275506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.275694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.275724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.275923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.275953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.276146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.276176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.276330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.276363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.276629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.276659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.276847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.276877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.277177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.277208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 
00:29:01.759 [2024-07-15 11:39:45.277453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.277485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.277701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.277732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.277889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.277920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.278107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.278137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.278389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.278422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.278623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.278654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.278942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.278973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.279249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.279281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.279486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.279515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.279768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.279799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 
00:29:01.759 [2024-07-15 11:39:45.279989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.280058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.280272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.280308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.280514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.280546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.280667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.280696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.280893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.280924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.281149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.281179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.281408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.281440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.281722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.281752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.281934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.281964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.282167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.282197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 
00:29:01.759 [2024-07-15 11:39:45.282408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.282439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.282625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.282655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.759 qpair failed and we were unable to recover it. 00:29:01.759 [2024-07-15 11:39:45.282808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.759 [2024-07-15 11:39:45.282838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.282959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.282997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.283207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.283247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.283446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.283476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.283667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.283697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.283947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.283977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.284250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.284282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.284468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.284499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 
00:29:01.760 [2024-07-15 11:39:45.284695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.284724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.285006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.285037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.285262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.285293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.285542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.285571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.285858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.285889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.286024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.286054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.286263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.286294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.286492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.286522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.286799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.286829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.287008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.287038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 
00:29:01.760 [2024-07-15 11:39:45.287171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.287200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.287431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.287462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.287655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.287685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.287935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.287965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.288153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.288182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.288392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.288422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.288565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.288595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.288894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.288923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.289072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.289102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.289288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.289319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 
00:29:01.760 [2024-07-15 11:39:45.289537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.289568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.289820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.289850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.290102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.290131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.290398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.290429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.290701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.290731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.290933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.290963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.291080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.291111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.291313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.291343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.291536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.291566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.291764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.291794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 
00:29:01.760 [2024-07-15 11:39:45.291910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.291941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.292215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.292263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.292541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.760 [2024-07-15 11:39:45.292572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.760 qpair failed and we were unable to recover it. 00:29:01.760 [2024-07-15 11:39:45.292718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.292748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.292903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.292934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.293071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.293101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.293299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.293332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.293537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.293567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.293763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.293794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.294000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.294031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 
00:29:01.761 [2024-07-15 11:39:45.294288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.294318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.294590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.294620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.294903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.294934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.295083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.295113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.295254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.295285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.295488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.295518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.295660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.295690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.295891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.295921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.296100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.296130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.296318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.296349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 
00:29:01.761 [2024-07-15 11:39:45.296495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.296525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.296711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.296741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.296944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.296974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.297174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.297205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.297412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.297443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.297718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.297748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.298011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.298040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.298236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.298267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.298397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.298427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.298620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.298650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 
00:29:01.761 [2024-07-15 11:39:45.298791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.298826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.299024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.299054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.299247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.299278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.299556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.299587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.299797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.299826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.299979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.300010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.300222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.300259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.300464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.300493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.300696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.300725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.301004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.301034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 
00:29:01.761 [2024-07-15 11:39:45.301309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.301340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.301461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.301490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.301629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.301659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.761 [2024-07-15 11:39:45.301860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.761 [2024-07-15 11:39:45.301889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.761 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.302069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.302099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.302299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.302330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.302479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.302510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.302764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.302794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.302934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.302964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.303185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.303215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 
00:29:01.762 [2024-07-15 11:39:45.303419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.303450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.303631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.303661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.303840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.303869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.304020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.304050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.304265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.304297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.304516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.304546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.304767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.304797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.304993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.305023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.305160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.305189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.305340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.305371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 
00:29:01.762 [2024-07-15 11:39:45.305508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.305538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.305764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.305794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.305977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.306006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.306201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.306237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.306414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.306446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.306631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.306661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.306942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.306971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.307241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.307273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.307546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.307576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.307728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.307758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 
00:29:01.762 [2024-07-15 11:39:45.307899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.307934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.308114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.308143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.308339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.308370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.308519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.308549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.308820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.308849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.308990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.309020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.309205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.309252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.309519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.309550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.309789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.309818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.310077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.310107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 
00:29:01.762 [2024-07-15 11:39:45.310382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.310413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.310672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.310701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.310925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.310955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.311180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.762 [2024-07-15 11:39:45.311210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.762 qpair failed and we were unable to recover it. 00:29:01.762 [2024-07-15 11:39:45.311408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.311439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.311594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.311624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.311815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.311845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.311993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.312023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.312136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.312166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.312360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.312391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 
00:29:01.763 [2024-07-15 11:39:45.312598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.312627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.312878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.312907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.313037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.313067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.313189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.313219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.313365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.313394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.313611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.313640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.313781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.313811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.313999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.314030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.314218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.314255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.314438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.314468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 
00:29:01.763 [2024-07-15 11:39:45.314668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.314698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.314888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.314918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.315039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.315069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.315338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.315368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.315642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.315672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.315876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.315906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.316021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.316050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.316321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.316352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.316503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.316534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.316723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.316753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 
00:29:01.763 [2024-07-15 11:39:45.316948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.316983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.317261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.317292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.317516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.317548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.317694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.317724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.317912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.317942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.318136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.318166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.318351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.318381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.318655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.318685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.763 qpair failed and we were unable to recover it. 00:29:01.763 [2024-07-15 11:39:45.318876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.763 [2024-07-15 11:39:45.318906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.319090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.319120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 
00:29:01.764 [2024-07-15 11:39:45.319255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.319286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.319524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.319553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.319754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.319784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.319985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.320015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.320135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.320165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.320430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.320461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.320615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.320644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.320816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.320846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.321120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.321150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.321423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.321455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 
00:29:01.764 [2024-07-15 11:39:45.321658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.321689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.321870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.321900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.322178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.322208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.322417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.322447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.322634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.322664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.322929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.322959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.323108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.323139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.323372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.323403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.323522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.323551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.323700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.323730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 
00:29:01.764 [2024-07-15 11:39:45.323863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.323893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.324076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.324106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.324364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.324394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.324513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.324543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.324689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.324720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.324828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.324858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.325042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.325071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.325273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.325305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.325583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.325613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 00:29:01.764 [2024-07-15 11:39:45.325917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.764 [2024-07-15 11:39:45.325947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:01.764 qpair failed and we were unable to recover it. 
00:29:02.047 [2024-07-15 11:39:45.326198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.047 [2024-07-15 11:39:45.326245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.047 qpair failed and we were unable to recover it. 00:29:02.047 [2024-07-15 11:39:45.326455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.047 [2024-07-15 11:39:45.326485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.047 qpair failed and we were unable to recover it. 00:29:02.047 [2024-07-15 11:39:45.326693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.047 [2024-07-15 11:39:45.326723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.047 qpair failed and we were unable to recover it. 00:29:02.047 [2024-07-15 11:39:45.327003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.047 [2024-07-15 11:39:45.327033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.047 qpair failed and we were unable to recover it. 00:29:02.047 [2024-07-15 11:39:45.327175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.047 [2024-07-15 11:39:45.327205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.047 qpair failed and we were unable to recover it. 00:29:02.047 [2024-07-15 11:39:45.327438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.047 [2024-07-15 11:39:45.327469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.047 qpair failed and we were unable to recover it. 00:29:02.047 [2024-07-15 11:39:45.327607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.047 [2024-07-15 11:39:45.327636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.047 qpair failed and we were unable to recover it. 00:29:02.047 [2024-07-15 11:39:45.327840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.047 [2024-07-15 11:39:45.327870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.047 qpair failed and we were unable to recover it. 00:29:02.047 [2024-07-15 11:39:45.328002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.047 [2024-07-15 11:39:45.328032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.047 qpair failed and we were unable to recover it. 00:29:02.047 [2024-07-15 11:39:45.328203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.047 [2024-07-15 11:39:45.328239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.047 qpair failed and we were unable to recover it. 
00:29:02.047 [2024-07-15 11:39:45.328436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.047 [2024-07-15 11:39:45.328467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.047 qpair failed and we were unable to recover it. 00:29:02.047 [2024-07-15 11:39:45.328595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.047 [2024-07-15 11:39:45.328625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.047 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.328879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.328908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.329179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.329208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.329435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.329465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.329652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.329682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.329906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.329935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.330185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.330215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.330356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.330386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.330596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.330626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 
00:29:02.048 [2024-07-15 11:39:45.330877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.330906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.331045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.331074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.331268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.331298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.331419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.331449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.331700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.331730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.331936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.331967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.332167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.332197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.332416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.332447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.332677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.332706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.332980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.333010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 
00:29:02.048 [2024-07-15 11:39:45.333159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.333188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.333409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.333440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.333655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.333685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.333878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.333907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.334111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.334141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.334422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.334452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.334662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.334691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.334969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.334998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.335195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.335235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.335463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.335493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 
00:29:02.048 [2024-07-15 11:39:45.335687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.335721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.335938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.335967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.336161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.336191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.336405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.336435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.336625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.336656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.336839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.336869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.337011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.337041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.337251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.337283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.337574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.337604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.337798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.337828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 
00:29:02.048 [2024-07-15 11:39:45.338023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.338053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.338270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.048 [2024-07-15 11:39:45.338301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.048 qpair failed and we were unable to recover it. 00:29:02.048 [2024-07-15 11:39:45.338581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.338611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.338881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.338910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.339112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.339143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.339415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.339446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.339649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.339679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.339876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.339905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.340179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.340209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.340441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.340471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 
00:29:02.049 [2024-07-15 11:39:45.340691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.340721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.340914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.340944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.341111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.341140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.341330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.341361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.341556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.341587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.341780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.341810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.342085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.342115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.342304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.342335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.342587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.342616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.342750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.342781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 
00:29:02.049 [2024-07-15 11:39:45.343000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.343029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.343147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.343177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.343400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.343431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.343614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.343644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.343772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.343802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.343937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.343967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.344244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.344275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.344402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.344432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.344561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.344592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.344821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.344852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 
00:29:02.049 [2024-07-15 11:39:45.345056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.345091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.345251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.345283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.345558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.345589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.345729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.345759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.345980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.346010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.346204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.346244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.346479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.346510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.346741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.346770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.347043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.347073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.347347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.347379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 
00:29:02.049 [2024-07-15 11:39:45.347572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.347602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.347807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.347837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.049 [2024-07-15 11:39:45.348037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.049 [2024-07-15 11:39:45.348067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.049 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.348253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.348284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.348438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.348468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.348597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.348627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.348744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.348774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.349029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.349059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.349205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.349250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.349459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.349490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 
00:29:02.050 [2024-07-15 11:39:45.349692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.349722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.349917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.349946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.350128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.350158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.350343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.350374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.350652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.350682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.350828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.350858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.351059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.351089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.351383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.351414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.351627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.351657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.351791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.351820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 
00:29:02.050 [2024-07-15 11:39:45.352009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.352038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.352242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.352272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.352534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.352564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.352766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.352797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.352924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.352953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.353086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.353116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.353260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.353291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.353432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.353462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.353664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.353694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.353878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.353908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 
00:29:02.050 [2024-07-15 11:39:45.354050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.354085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.354280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.354311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.354512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.354542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.354839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.354869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.355133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.355163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.355415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.355446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.355562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.355592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.355799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.355828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.356024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.356053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.356199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.356235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 
00:29:02.050 [2024-07-15 11:39:45.356362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.356393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.356591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.356620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.356812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.356841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.050 qpair failed and we were unable to recover it. 00:29:02.050 [2024-07-15 11:39:45.357030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.050 [2024-07-15 11:39:45.357060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.357274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.357304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.357427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.357457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.357589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.357618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.357907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.357937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.358137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.358167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.358383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.358413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 
00:29:02.051 [2024-07-15 11:39:45.358601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.358631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.358835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.358865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.359096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.359125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.359328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.359358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.359495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.359524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.359783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.359812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.360007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.360037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.360275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.360307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.360479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.360509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.360660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.360689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 
00:29:02.051 [2024-07-15 11:39:45.360822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.360852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.360981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.361011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.361156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.361186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.361404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.361435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.361710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.361740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.362023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.362053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.362243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.362274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.362469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.362499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.362692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.362722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.363004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.363035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 
00:29:02.051 [2024-07-15 11:39:45.363182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.363219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.363415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.363445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.363641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.363671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.363922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.363951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.364206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.364245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.051 qpair failed and we were unable to recover it. 00:29:02.051 [2024-07-15 11:39:45.364451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.051 [2024-07-15 11:39:45.364481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.364659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.364688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.364902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.364932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.365067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.365096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.365257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.365289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 
00:29:02.052 [2024-07-15 11:39:45.365437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.365467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.365653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.365682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.365861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.365891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.366016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.366046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.366247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.366279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.366542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.366573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.366760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.366790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.367040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.367069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.367325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.367355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.367476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.367506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 
00:29:02.052 [2024-07-15 11:39:45.367757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.367787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.368011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.368040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.368316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.368346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.368490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.368520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.368721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.368751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.369016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.369045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.369182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.369211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.369367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.369398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.369596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.369626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.369749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.369779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 
00:29:02.052 [2024-07-15 11:39:45.369964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.369993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.370189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.370219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.370414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.370444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.370583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.370613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.370751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.370780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.371044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.052 [2024-07-15 11:39:45.371074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.052 qpair failed and we were unable to recover it. 00:29:02.052 [2024-07-15 11:39:45.371348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.371379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.371629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.371659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.371992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.372022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.372242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.372273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 
00:29:02.053 [2024-07-15 11:39:45.372402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.372436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.372638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.372667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.372804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.372835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.373119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.373149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.373340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.373371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.373621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.373651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.373928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.373957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.374102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.374131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.374254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.374284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.374420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.374450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 
00:29:02.053 [2024-07-15 11:39:45.374580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.374610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.374719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.374748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.375026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.375055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.375312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.375342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.375573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.375602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.375833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.375862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.376066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.376096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.376365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.376395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.376649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.376679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.376933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.376963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 
00:29:02.053 [2024-07-15 11:39:45.377082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.377112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.377384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.377414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.377618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.377647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.377849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.377878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.378177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.378207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.378367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.378397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.378524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.378553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.378748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.378778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.379037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.379066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.379341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.379372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 
00:29:02.053 [2024-07-15 11:39:45.379572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.379602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.379807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.379837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.380047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.380078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.380261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.053 [2024-07-15 11:39:45.380292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.053 qpair failed and we were unable to recover it. 00:29:02.053 [2024-07-15 11:39:45.380501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.380531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.380779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.380808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.380963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.380993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.381201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.381248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.381481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.381511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.381641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.381671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 
00:29:02.054 [2024-07-15 11:39:45.381869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.381903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.382167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.382197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.382479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.382510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.382662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.382691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.382846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.382876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.383066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.383095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.383365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.383397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.383578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.383607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.383806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.383836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.383966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.383996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 
00:29:02.054 [2024-07-15 11:39:45.384106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.384135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.384357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.384388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.384661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.384691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.384913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.384943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.385232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.385264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.385470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.385500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.385630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.385660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.385834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.385864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.386152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.386182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.386396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.386427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 
00:29:02.054 [2024-07-15 11:39:45.386629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.386659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.386790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.386820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.387026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.387056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.387328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.387359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.387637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.387667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.387813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.387843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.388039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.388069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.388270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.388301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.054 qpair failed and we were unable to recover it. 00:29:02.054 [2024-07-15 11:39:45.388581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.054 [2024-07-15 11:39:45.388611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.388721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.388751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 
00:29:02.055 [2024-07-15 11:39:45.389026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.389056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.389329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.389360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.389545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.389575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.389706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.389736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.389988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.390018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.390284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.390315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.390516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.390546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.390798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.390828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.391032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.391061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.391330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.391361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 
00:29:02.055 [2024-07-15 11:39:45.391505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.391540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.391749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.391779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.391996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.392026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.392222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.392269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.392554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.392583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.392706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.392736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.392957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.392987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.393121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.393151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.393340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.393371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.393558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.393587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 
00:29:02.055 [2024-07-15 11:39:45.393807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.393837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.394022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.394052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.394244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.394275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.394447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.394476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.394667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.394697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.394892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.394921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.395127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.395157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.395357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.395388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.395638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.395668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.395867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.395897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 
00:29:02.055 [2024-07-15 11:39:45.396124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.396154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.396276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.396306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.396428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.396459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.396609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.396638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.396767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.396796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.396928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.396958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.397264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.397296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.397574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.397606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.397793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.055 [2024-07-15 11:39:45.397823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.055 qpair failed and we were unable to recover it. 00:29:02.055 [2024-07-15 11:39:45.398129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.398157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 
00:29:02.056 [2024-07-15 11:39:45.398422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.398453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.398712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.398742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.398942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.398971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.399155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.399184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.399463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.399494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.399756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.399786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.400050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.400080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.400280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.400311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.400563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.400592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.400865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.400894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 
00:29:02.056 [2024-07-15 11:39:45.401024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.401058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.401255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.401285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.401418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.401449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.401658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.401687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.401935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.401965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.402235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.402266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.402406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.402436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.402687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.402717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.402909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.402939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.403216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.403254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 
00:29:02.056 [2024-07-15 11:39:45.403488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.403519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.403788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.403818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.404015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.404045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.404245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.404275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.404555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.404586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.404798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.404828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.405111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.405140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.056 qpair failed and we were unable to recover it. 00:29:02.056 [2024-07-15 11:39:45.405391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.056 [2024-07-15 11:39:45.405422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.405674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.405705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.405841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.405871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 
00:29:02.057 [2024-07-15 11:39:45.406073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.406103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.406241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.406272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.406459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.406489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.406623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.406652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.406836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.406866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.407017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.407048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.407185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.407215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.407430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.407461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.407757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.407786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.407978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.408008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 
00:29:02.057 [2024-07-15 11:39:45.408212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.408251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.408417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.408447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.408634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.408664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.408848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.408877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.409022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.409053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.409249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.409279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.409505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.409535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.409746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.409777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.410002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.410032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.410257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.410289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 
00:29:02.057 [2024-07-15 11:39:45.410465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.410534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.410755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.410790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.410913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.410944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.411138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.411170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.411382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.411414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.411696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.411726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.411941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.411970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.412222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.412259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.412465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.412495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.412647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.412677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 
00:29:02.057 [2024-07-15 11:39:45.412812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.412842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.413093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.413123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.413371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.413401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.413540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.057 [2024-07-15 11:39:45.413571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.057 qpair failed and we were unable to recover it. 00:29:02.057 [2024-07-15 11:39:45.413781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.413811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.413949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.413979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.414205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.414246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.414453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.414483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.414621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.414650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.414865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.414894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 
00:29:02.058 [2024-07-15 11:39:45.415117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.415147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.415282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.415314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.415501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.415531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.415719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.415749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.416000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.416030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.416283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.416313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.416498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.416528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.416729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.416758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.416873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.416903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.417102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.417131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 
00:29:02.058 [2024-07-15 11:39:45.417327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.417360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.417578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.417608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.417818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.417847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.418033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.418062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.418265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.418295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.418442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.418472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.418608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.418638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.418890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.418919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.419114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.419143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.419346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.419377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 
00:29:02.058 [2024-07-15 11:39:45.419594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.419624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.419896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.419931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.420121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.420151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.420354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.420385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.420563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.420592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.420789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.420819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.420966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.420996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.421285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.421315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.058 qpair failed and we were unable to recover it. 00:29:02.058 [2024-07-15 11:39:45.421521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.058 [2024-07-15 11:39:45.421551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.421735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.421765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 
00:29:02.059 [2024-07-15 11:39:45.422042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.422071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.422205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.422243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.422437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.422467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.422658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.422688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.422883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.422913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.423171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.423201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.423484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.423514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.423779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.423808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.424088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.424117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.424254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.424285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 
00:29:02.059 [2024-07-15 11:39:45.424534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.424564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.424763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.424792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.425010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.425039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.425238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.425269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.425453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.425483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.425651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.425681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.425823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.425853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.426127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.426157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.426352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.426388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.426532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.426562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 
00:29:02.059 [2024-07-15 11:39:45.426756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.426786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.426914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.426944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.427069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.427098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.427384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.427415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.427605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.427634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.427830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.427860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.428004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.428033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.428217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.428258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.428479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.428509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.428714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.428743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 
00:29:02.059 [2024-07-15 11:39:45.428892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.428922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.429058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.429088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.429281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.429312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.429416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.429446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.059 [2024-07-15 11:39:45.429761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.059 [2024-07-15 11:39:45.429791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.059 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.430011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.430042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.430271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.430302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.430491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.430520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.430709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.430739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.430922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.430951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 
00:29:02.060 [2024-07-15 11:39:45.431291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.431322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.431527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.431556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.431755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.431784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.431972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.432002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.432211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.432248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.432450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.432485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.432614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.432643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.432834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.432864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.433118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.433148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.433334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.433365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 
00:29:02.060 [2024-07-15 11:39:45.433554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.433584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.433771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.433800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.433929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.433958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.434212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.434249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.434446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.434476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.434673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.434703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.434848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.434877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.435161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.435191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.435398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.435429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.435573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.435603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 
00:29:02.060 [2024-07-15 11:39:45.435855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.435885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.436165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.436195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.436342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.436373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.436572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.436601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.436758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.436788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.436908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.436937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.437120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.437150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.437343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.437375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.437630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.437660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.437853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.437882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 
00:29:02.060 [2024-07-15 11:39:45.438066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.438096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.438304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.438334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.438480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.438511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.438728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.438758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.060 [2024-07-15 11:39:45.438895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.060 [2024-07-15 11:39:45.438925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.060 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.439134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.439164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.439362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.439392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.439516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.439545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.439683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.439713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.439912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.439941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 
00:29:02.061 [2024-07-15 11:39:45.440136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.440167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.440411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.440442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.440644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.440674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.440872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.440900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.441025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.441054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.441279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.441311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.441507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.441538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.441750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.441779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.441899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.441928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.442126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.442155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 
00:29:02.061 [2024-07-15 11:39:45.442294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.442325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.442450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.442479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.442627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.442656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.442935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.442965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.443105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.443135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.443335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.443367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.443658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.443688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.443819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.443849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.443983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.444013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.444242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.444272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 
00:29:02.061 [2024-07-15 11:39:45.444396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.444426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.444559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.444589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.444790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.444819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.445129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.445159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.445410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.445442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.445639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.445668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.445801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.445830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.446121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.446151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.061 qpair failed and we were unable to recover it. 00:29:02.061 [2024-07-15 11:39:45.446293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.061 [2024-07-15 11:39:45.446323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.446469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.446498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 
00:29:02.062 [2024-07-15 11:39:45.446628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.446658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.446848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.446877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.447000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.447030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.447232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.447268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.447458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.447488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.447687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.447717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.447917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.447946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.448086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.448115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.448368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.448399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.448543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.448572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 
00:29:02.062 [2024-07-15 11:39:45.448780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.448810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.449107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.449137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.449268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.449299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.449518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.449547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.449733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.449762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.449963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.449992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.450188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.450217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.450377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.450408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.450542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.450572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.450831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.450861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 
00:29:02.062 [2024-07-15 11:39:45.451110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.451140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.451328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.451358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.451487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.451518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.451658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.451687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.451805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.451834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.451983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.452013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.452212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.452250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.452422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.452452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.452638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.452668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.452796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.452826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 
00:29:02.062 [2024-07-15 11:39:45.453040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.453075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.453222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.453264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.453487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.453517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.453705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.453735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.453895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.453926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.454064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.454094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.454261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.454294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.454421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.454452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.454641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.062 [2024-07-15 11:39:45.454671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.062 qpair failed and we were unable to recover it. 00:29:02.062 [2024-07-15 11:39:45.454820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.454850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 
00:29:02.063 [2024-07-15 11:39:45.455068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.455098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.455349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.455379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.455524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.455553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.455766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.455795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.456032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.456061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.456242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.456273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.456546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.456575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.456705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.456736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.456886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.456916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.457107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.457137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 
00:29:02.063 [2024-07-15 11:39:45.457335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.457365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.457498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.457527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.457664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.457694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.457829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.457860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.458044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.458073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.458274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.458306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.458512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.458542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.458669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.458698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.458843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.458873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.459006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.459036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 
00:29:02.063 [2024-07-15 11:39:45.459259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.459290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.459412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.459448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.459583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.459613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.459739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.459769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.459971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.460001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.460124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.460154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.460298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.460329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.460456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.460485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.460671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.460701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.460886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.460916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 
00:29:02.063 [2024-07-15 11:39:45.461110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.461140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.461340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.461371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.461516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.461546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.461766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.461796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.461995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.462025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.462185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.462215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.462433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.462463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.462586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.462616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.462868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.462897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 00:29:02.063 [2024-07-15 11:39:45.463016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.463046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.063 qpair failed and we were unable to recover it. 
00:29:02.063 [2024-07-15 11:39:45.463185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.063 [2024-07-15 11:39:45.463215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.463424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.463454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.463642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.463671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.463823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.463852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.464069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.464098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.464309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.464340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.464558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.464587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.464769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.464798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.464980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.465010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.465204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.465241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 
00:29:02.064 [2024-07-15 11:39:45.465430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.465460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.465739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.465768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.465959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.465989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.466124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.466154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.466301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.466332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.466524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.466555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.466685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.466715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.466995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.467025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.467242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.467279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.467422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.467453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 
00:29:02.064 [2024-07-15 11:39:45.467584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.467614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.467794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.467824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.468076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.468106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.468306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.468337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.468522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.468552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.468836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.468866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.469154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.469184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.469394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.469431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.469621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.469650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.469787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.469817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 
00:29:02.064 [2024-07-15 11:39:45.470025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.470056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.470306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.470337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.470533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.470563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.470712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.470742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.470881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.470910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.471111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.471148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.471339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.471370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.471526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.471557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.471682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.471712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.471859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.471889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 
00:29:02.064 [2024-07-15 11:39:45.472085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.472115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.064 [2024-07-15 11:39:45.472265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.064 [2024-07-15 11:39:45.472296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.064 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.472567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.472596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.472789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.472818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.473004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.473034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.473297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.473332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.473527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.473557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.473684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.473713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.473902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.473931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.474208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.474273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 
00:29:02.065 [2024-07-15 11:39:45.474410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.474439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.474622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.474652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.474838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.474868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.475082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.475112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.475297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.475328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.475521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.475551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.475685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.475717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.477874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.477932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.478214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.478262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.478526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.478558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 
00:29:02.065 [2024-07-15 11:39:45.478809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.478839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.479052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.479083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.479222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.479265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.479382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.479413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.479624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.479654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.479880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.479912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.480173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.480203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.480341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.480371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.480566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.480597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.480745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.480775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 
00:29:02.065 [2024-07-15 11:39:45.480915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.480946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.481068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.481099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.481333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.481373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.481513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.481542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.481744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.481775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.481971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.482000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.482261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.482293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.482488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.482518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.482725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.065 [2024-07-15 11:39:45.482756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.065 qpair failed and we were unable to recover it. 00:29:02.065 [2024-07-15 11:39:45.482949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.482980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 
00:29:02.066 [2024-07-15 11:39:45.483238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.483270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.483528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.483558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.483776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.483807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.484037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.484067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.484193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.484223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.484442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.484472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.484614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.484644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.484864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.484894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.485117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.485147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.485278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.485310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 
00:29:02.066 [2024-07-15 11:39:45.485507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.485539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.485754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.485785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.485934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.485964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.486242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.486273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.486465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.486505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.486706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.486735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.486934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.486964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.487125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.487155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.487370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.487401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.487527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.487556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 
00:29:02.066 [2024-07-15 11:39:45.487682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.487712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.487852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.487882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.488008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.488038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.488252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.488284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.488479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.488509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.488726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.488757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.488896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.488926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.489149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.489178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.489373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.489403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.489626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.489656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 
00:29:02.066 [2024-07-15 11:39:45.489791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.489822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.489962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.489992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.490118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.490148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.490392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.490424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.490692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.490727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.490929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.490961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.491096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.491127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.491268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.066 [2024-07-15 11:39:45.491299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.066 qpair failed and we were unable to recover it. 00:29:02.066 [2024-07-15 11:39:45.491505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.491534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.491746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.491777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 
00:29:02.067 [2024-07-15 11:39:45.491916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.491946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.492096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.492127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.492265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.492296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.492555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.492585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.492873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.492903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.494480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.494533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.494777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.494809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.495011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.495042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.495250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.495285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.495504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.495535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 
00:29:02.067 [2024-07-15 11:39:45.495638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.495668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.495951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.495982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.496188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.496218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.496352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.496382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.496503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.496533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.496671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.496701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.496948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.496978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.497177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.497206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.497343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.497373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.497626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.497663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 
00:29:02.067 [2024-07-15 11:39:45.497851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.497886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.498100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.498129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.498323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.498354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.498481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.498511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.498713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.498743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.498876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.498905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.499158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.499187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.499313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.499344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.499528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.499558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.499806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.499837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 
00:29:02.067 [2024-07-15 11:39:45.500134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.500165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.500302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.500333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.500530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.500560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.500685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.500715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.500956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.500986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.501186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.501216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.501450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.501481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.501665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.501695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.501824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.501853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.067 qpair failed and we were unable to recover it. 00:29:02.067 [2024-07-15 11:39:45.502040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.067 [2024-07-15 11:39:45.502069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 
00:29:02.068 [2024-07-15 11:39:45.502191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.502221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.502376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.502406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.502558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.502588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.502778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.502808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.502997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.503027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.503240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.503271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.503391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.503421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.503609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.503644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.503834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.503864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.504050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.504079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 
00:29:02.068 [2024-07-15 11:39:45.504208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.504248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.504387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.504419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.504622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.504653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.504785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.504815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.505013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.505043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.505235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.505266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.505410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.505439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.505643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.505673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.505867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.505897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.506104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.506136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 
00:29:02.068 [2024-07-15 11:39:45.506320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.506351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.506553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.506583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.506783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.506812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.506948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.506977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.507185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.507214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.507513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.507544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.507731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.507761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.507961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.507990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.508152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.508182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.508383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.508414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 
00:29:02.068 [2024-07-15 11:39:45.508544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.508574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.508791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.508820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.509022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.509052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.509266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.509297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.509426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.509456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.509694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.509724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.509920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.509950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.510202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.510243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.510400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.510430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.510570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.510600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 
00:29:02.068 [2024-07-15 11:39:45.510785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.068 [2024-07-15 11:39:45.510815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.068 qpair failed and we were unable to recover it. 00:29:02.068 [2024-07-15 11:39:45.510938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.510969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.511113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.511143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.511339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.511370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.511495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.511525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.511719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.511749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.511884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.511914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.512120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.512151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.512348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.512385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.512514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.512545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 
00:29:02.069 [2024-07-15 11:39:45.512822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.512853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.513046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.513076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.513288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.513319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.513457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.513487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.513691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.513721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.513911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.513941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.514130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.514160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.514375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.514406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.514595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.514626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.514762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.514793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 
00:29:02.069 [2024-07-15 11:39:45.514994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.515026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.515235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.515266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.515487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.515518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.515719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.515749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.515934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.515965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.516104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.516134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.516282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.516314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.516497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.516527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.516711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.516741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.516875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.516905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 
00:29:02.069 [2024-07-15 11:39:45.517158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.517188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.517323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.517354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.517652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.517681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.517807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.517837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.518038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.518068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.518346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.518382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.518647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.518676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.518797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.518828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.519033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.519069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.519326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.519357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 
00:29:02.069 [2024-07-15 11:39:45.519550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.519580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.519714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.069 [2024-07-15 11:39:45.519745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.069 qpair failed and we were unable to recover it. 00:29:02.069 [2024-07-15 11:39:45.520013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.520042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.520183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.520213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.520338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.520368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.520554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.520583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.520695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.520725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.520913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.520942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.521126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.521156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.521419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.521451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 
00:29:02.070 [2024-07-15 11:39:45.521656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.521686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.521818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.521848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.522036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.522066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.522369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.522399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.522584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.522614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.522766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.522795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.522993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.523023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.523271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.523302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.523579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.523608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.523822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.523852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 
00:29:02.070 [2024-07-15 11:39:45.524064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.524094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.524294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.524326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.524472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.524506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.524641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.524671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.524863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.524893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.525104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.525133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.525333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.525364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.525521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.525551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.525810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.525840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.526054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.526084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 
00:29:02.070 [2024-07-15 11:39:45.526358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.526388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.526637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.526667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.526923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.526953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.527084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.527114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.527272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.527303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.527555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.527584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.527843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.527874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.070 qpair failed and we were unable to recover it. 00:29:02.070 [2024-07-15 11:39:45.528105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.070 [2024-07-15 11:39:45.528135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.528344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.528374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.528568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.528597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 
00:29:02.071 [2024-07-15 11:39:45.528742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.528771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.529013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.529043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.529194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.529223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.529376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.529407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.529598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.529628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.529814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.529844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.530028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.530059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.530246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.530277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.530542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.530572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.530707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.530741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 
00:29:02.071 [2024-07-15 11:39:45.530956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.530986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.531179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.531209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.531443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.531475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.531680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.531710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.531828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.531859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.531997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.532027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.532243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.532275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.532528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.532560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.532692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.532720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.532912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.532943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 
00:29:02.071 [2024-07-15 11:39:45.533061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.533092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.533280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.533311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.533512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.533543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.533688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.533719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.533938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.533968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.534097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.534127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.534274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.534305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.534493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.534523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.534724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.534754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.534938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.534967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 
00:29:02.071 [2024-07-15 11:39:45.535094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.535123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.535231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.535263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.535519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.535550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.535694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.535725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.535844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.535875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.535992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.536023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.536212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.536250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.536398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.071 [2024-07-15 11:39:45.536430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.071 qpair failed and we were unable to recover it. 00:29:02.071 [2024-07-15 11:39:45.536559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.536589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.536719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.536750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 
00:29:02.072 [2024-07-15 11:39:45.536871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.536902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.537076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.537106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.537314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.537344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.537467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.537496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.537718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.537748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.537946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.537976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.538108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.538139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.538366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.538397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.538582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.538612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.538799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.538828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 
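Editor's note: the records above all end in errno = 111, which on Linux is ECONNREFUSED — each qpair connect to 10.0.0.2 port 4420 (the NVMe/TCP default port) is being actively refused, typically because nothing is accepting on that address/port at the time of the attempt. A minimal standalone C sketch of the same failure mode follows; it is an illustrative example only, not SPDK's posix_sock_create.

/*
 * Illustrative standalone sketch (not SPDK's posix_sock_create): a plain TCP
 * connect() to 10.0.0.2:4420 surfaces errno 111 (ECONNREFUSED) when no
 * NVMe/TCP listener is accepting on that port.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* On Linux, a refused connection reports errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}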
00:29:02.072 [2024-07-15 11:39:45.539088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.539156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.539380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.539417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.539643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.539675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.539864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.539895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.540091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.540122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.540319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.540350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.540543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.540573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.540699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.540730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.540934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.540965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 00:29:02.072 [2024-07-15 11:39:45.541148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.072 [2024-07-15 11:39:45.541178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.072 qpair failed and we were unable to recover it. 
00:29:02.078 [2024-07-15 11:39:45.586058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.586089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.586212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.586250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.586376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.586406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.586566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.586596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.586803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.586833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.587027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.587058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.587315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.587346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.587478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.587508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.587711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.587741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.587960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.587990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 
00:29:02.078 [2024-07-15 11:39:45.588150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.588181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.588377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.588407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.588550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.588580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.588718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.588749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.588885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.588916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.589111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.589141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.589345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.589376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.589521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.589552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.589755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.589785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.589994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.590024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 
00:29:02.078 [2024-07-15 11:39:45.590219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.590272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.590474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.590504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.590759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.590789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.590977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.591007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.591162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.591192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.591390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.591422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.591610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.591640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.591838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.591868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.592136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.592166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.592319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.592350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 
00:29:02.078 [2024-07-15 11:39:45.592661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.592697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.592893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.592923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.593114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.593144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.593404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.593435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.593566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.593596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.593785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.593815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.078 qpair failed and we were unable to recover it. 00:29:02.078 [2024-07-15 11:39:45.593951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.078 [2024-07-15 11:39:45.593980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.594181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.594211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.594362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.594392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.594515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.594546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 
00:29:02.079 [2024-07-15 11:39:45.594730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.594760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.594945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.594976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.595174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.595204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.595332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.595363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.595490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.595521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.595770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.595800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.596075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.596106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.596379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.596410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.596561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.596591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.596779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.596810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 
00:29:02.079 [2024-07-15 11:39:45.596952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.596983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.597113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.597143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.597363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.597395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.597532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.597562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.597747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.597777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.598026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.598056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.598179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.598207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.598435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.598466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.598717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.598747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.598890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.598920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 
00:29:02.079 [2024-07-15 11:39:45.599133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.599163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.599456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.599487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.599714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.599745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.600022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.600052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.079 [2024-07-15 11:39:45.600296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.079 [2024-07-15 11:39:45.600328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.079 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.600533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.600563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.600672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.600702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.600847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.600877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.601085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.601116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.601337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.601368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 
00:29:02.080 [2024-07-15 11:39:45.601670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.601706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.601848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.601878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.602070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.602101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.602336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.602368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.602554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.602584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.602738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.602768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.603019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.603049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.603252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.603283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.603508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.603538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.603673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.603704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 
00:29:02.080 [2024-07-15 11:39:45.603917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.603948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.604146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.604176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.604286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.604317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.604539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.604570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.604777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.604807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.605060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.605091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.605240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.605271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.605409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.605441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.605590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.605621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.605817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.605848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 
00:29:02.080 [2024-07-15 11:39:45.605996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.606026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.606165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.606195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.606474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.606506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.606703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.606733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.607004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.607035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.607245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.607277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.607450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.607480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.607653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.607685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.607815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.607846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.607949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.607980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 
00:29:02.080 [2024-07-15 11:39:45.608163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.080 [2024-07-15 11:39:45.608194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.080 qpair failed and we were unable to recover it. 00:29:02.080 [2024-07-15 11:39:45.608436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.608468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.608656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.608687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.608880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.608909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.609110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.609140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.609280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.609312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.609532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.609562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.609697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.609727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.609938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.609968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.610164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.610194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 
00:29:02.081 [2024-07-15 11:39:45.610427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.610469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.610751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.610782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.610923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.610953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.611076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.611107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.611307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.611338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.611467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.611498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.611630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.611661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.611845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.611875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.612075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.612105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.612309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.612340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 
00:29:02.081 [2024-07-15 11:39:45.612613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.612643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.612777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.612807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.613002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.613032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.613261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.613292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.613447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.613478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.613597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.613627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.613809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.613840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.081 [2024-07-15 11:39:45.613974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.081 [2024-07-15 11:39:45.614002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.081 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.614186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.614217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.614449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.614484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 
00:29:02.372 [2024-07-15 11:39:45.614613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.614646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.614842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.614870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.615069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.615099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.615396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.615427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.615613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.615643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.615828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.615858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.616083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.616113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.616375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.616407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.616541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.616571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.616798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.616828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 
00:29:02.372 [2024-07-15 11:39:45.617083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.617114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.617251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.617283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.617422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.617452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.617569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.617599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.617803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.617834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.617968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.617998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.618198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.618238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.618507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.618538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.618684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.618715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.618845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.618875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 
00:29:02.372 [2024-07-15 11:39:45.619060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.619096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.619293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.619324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.619534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.619564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.619708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.619738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.619945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.619975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.620233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.620263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.372 [2024-07-15 11:39:45.620453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.372 [2024-07-15 11:39:45.620484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.372 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.620670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.620700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.620887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.620917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.621097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.621129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 
00:29:02.373 [2024-07-15 11:39:45.621313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.621345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.621550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.621581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.621722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.621753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.622033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.622062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.622263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.622295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.622487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.622518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.622739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.622770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.622967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.622997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.623250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.623281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.623401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.623432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 
00:29:02.373 [2024-07-15 11:39:45.623552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.623582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.623789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.623819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.623940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.623971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.624175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.624206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.624344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.624375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.624559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.624589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.624789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.624820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.624967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.624999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.625272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.625304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.625437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.625467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 
00:29:02.373 [2024-07-15 11:39:45.625681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.625711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.625837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.625868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.626077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.626107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.626363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.626395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.626649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.626680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.626865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.626896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.627172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.627203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.627414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.627445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.627650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.627680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.627797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.627828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 
00:29:02.373 [2024-07-15 11:39:45.628011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.628047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.628298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.628330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.628533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.628564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.628696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.628726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.628877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.628908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.629057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.373 [2024-07-15 11:39:45.629088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.373 qpair failed and we were unable to recover it. 00:29:02.373 [2024-07-15 11:39:45.629372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.629404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.629539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.629569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.629692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.629723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.629934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.629964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 
00:29:02.374 [2024-07-15 11:39:45.630151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.630181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.630373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.630405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.630537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.630567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.630703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.630734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.631017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.631047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.631242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.631274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.631527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.631557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.631808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.631838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.631990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.632021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.632162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.632193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 
00:29:02.374 [2024-07-15 11:39:45.632416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.632448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.632634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.632664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.632812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.632842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.632966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.632996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.633102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.633132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.633327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.633359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.633582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.633612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.633744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.633776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.633981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.634012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.634265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.634297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 
00:29:02.374 [2024-07-15 11:39:45.634490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.634520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.634667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.634697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.634928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.634960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.635223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.635263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.635391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.635422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.635558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.635589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.635782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.635813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.636008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.636038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.636234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.636265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.636394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.636424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 
00:29:02.374 [2024-07-15 11:39:45.636560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.636595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.636716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.636746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.636866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.636897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.637040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.637070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.637321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.637353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.637488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.637519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.637703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-07-15 11:39:45.637734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.374 qpair failed and we were unable to recover it. 00:29:02.374 [2024-07-15 11:39:45.637930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.637960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.638146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.638176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.638460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.638492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 
00:29:02.375 [2024-07-15 11:39:45.638694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.638724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.638908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.638939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.639218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.639257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.639446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.639476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.639617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.639648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.639932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.639963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.640164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.640194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.640308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.640340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.640529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.640560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.640747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.640777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 
00:29:02.375 [2024-07-15 11:39:45.640975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.641005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.641139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.641169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.641375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.641405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.641616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.641647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.641847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.641878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.641983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.642013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.642287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.642318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.642523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.642553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.642860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.642891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.643093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.643124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 
00:29:02.375 [2024-07-15 11:39:45.643314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.643344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.643471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.643502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.643642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.643672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.643873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.643904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.644041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.644072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.644269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.644300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.644421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.644452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.644654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.644684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.644937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.644967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.645153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.645183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 
00:29:02.375 [2024-07-15 11:39:45.645393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.645429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.645564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.645595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.645780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.645810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.645994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.646024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.646210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.646251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.646460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.646490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.375 [2024-07-15 11:39:45.646624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.375 [2024-07-15 11:39:45.646654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.375 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.646854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.646885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.647028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.647058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.647214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.647266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 
00:29:02.376 [2024-07-15 11:39:45.647540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.647569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.647764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.647794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.647993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.648024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.648281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.648312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.648513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.648544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.648755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.648785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.648903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.648933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.649205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.649242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.649465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.649497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.649723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.649754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 
00:29:02.376 [2024-07-15 11:39:45.649967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.649997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.650139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.650168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.650400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.650431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.650687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.650717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.650980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.651010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.651201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.651239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.651376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.651406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.651528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.651559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.651814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.651844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 00:29:02.376 [2024-07-15 11:39:45.652066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.376 [2024-07-15 11:39:45.652096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.376 qpair failed and we were unable to recover it. 
00:29:02.376 [2024-07-15 11:39:45.652243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.376 [2024-07-15 11:39:45.652274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420
00:29:02.376 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every subsequent connection attempt between 11:39:45.652 and 11:39:45.697: posix_sock_create reports connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420, and each qpair fails and cannot be recovered ...]
00:29:02.382 [2024-07-15 11:39:45.697462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.382 [2024-07-15 11:39:45.697493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420
00:29:02.382 qpair failed and we were unable to recover it.
00:29:02.382 [2024-07-15 11:39:45.697625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.697655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.697794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.697824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.698045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.698075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.698214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.698252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.698357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.698388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.698595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.698625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.698874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.698904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.699100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.699131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.699319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.699350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.699540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.699570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 
00:29:02.382 [2024-07-15 11:39:45.699853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.699884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.700075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.700106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.700311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.700343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.700469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.700500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.700694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.700725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.700976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.701007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.701205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.701243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.701444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.701475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.701674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.701704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.701856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.701887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 
00:29:02.382 [2024-07-15 11:39:45.702087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.702117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.702302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.702334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.702468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.702498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.702715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.702745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.702868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.702898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.703088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.703119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.703247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.703277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.703410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.703445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.703640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.703671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.703888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.703919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 
00:29:02.382 [2024-07-15 11:39:45.704105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.704135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.704327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.704358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.704499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.704528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.704666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.704696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.382 [2024-07-15 11:39:45.704929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.382 [2024-07-15 11:39:45.704959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.382 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.705145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.705176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.705331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.705362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.705478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.705508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.705636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.705667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.705816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.705847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 
00:29:02.383 [2024-07-15 11:39:45.706112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.706142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.706301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.706333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.706469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.706499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.706687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.706717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.706945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.706975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.707171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.707201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.707504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.707535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.707718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.707748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.707932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.707962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.708121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.708151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 
00:29:02.383 [2024-07-15 11:39:45.708420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.708450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.708636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.708666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.708853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.708883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.709075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.709105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.709329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.709361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.709508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.709539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.709684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.709715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.709918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.709948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.710077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.710107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.710298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.710329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 
00:29:02.383 [2024-07-15 11:39:45.710521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.710552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.710674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.710704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.710838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.710868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.711027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.711058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.711207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.711244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.711435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.711466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.711718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.711749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.711952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.711987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.712268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.712299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.712579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.712610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 
00:29:02.383 [2024-07-15 11:39:45.712829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.712859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.712986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.713016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.713155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.713184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.713467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.713498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.383 [2024-07-15 11:39:45.713619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.383 [2024-07-15 11:39:45.713649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.383 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.713869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.713899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.714168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.714198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.714411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.714442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.714633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.714662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.714876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.714906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 
00:29:02.384 [2024-07-15 11:39:45.715104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.715135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.715448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.715481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.715756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.715787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.715974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.716004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.716126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.716155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.716385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.716416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.716548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.716578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.716828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.716859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.717044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.717074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.717232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.717263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 
00:29:02.384 [2024-07-15 11:39:45.717553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.717584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.717709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.717740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.717888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.717919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.718126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.718156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.718392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.718423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.718721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.718751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.718960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.718990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.719113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.719143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.719340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.719371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.719530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.719560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 
00:29:02.384 [2024-07-15 11:39:45.719684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.719713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.719857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.719886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.720165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.720196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.720475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.720507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.720641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.720671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.720805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.720835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.720960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.720990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.721172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.721207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.721422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.721454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.721686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.721716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 
00:29:02.384 [2024-07-15 11:39:45.721929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.721959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.722137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.722167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.722331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.722363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.722499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.722530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.722790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.722820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.384 qpair failed and we were unable to recover it. 00:29:02.384 [2024-07-15 11:39:45.722956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.384 [2024-07-15 11:39:45.722986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.723205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.723244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.723378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.723408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.723622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.723652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.723789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.723820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 
00:29:02.385 [2024-07-15 11:39:45.723975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.724005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.724147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.724177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.724400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.724431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.724640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.724670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.724795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.724824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.725043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.725074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.725179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.725208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.725336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.725367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.725497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.725528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.725739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.725769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 
00:29:02.385 [2024-07-15 11:39:45.725889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.725919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.726106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.726136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.726268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.726300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.726438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.726469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.726657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.726688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.726892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.726923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.727128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.727158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.727431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.727462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.727675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.727705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.727898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.727928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 
00:29:02.385 [2024-07-15 11:39:45.728076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.728107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.728385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.728417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.728542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.728572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.385 qpair failed and we were unable to recover it. 00:29:02.385 [2024-07-15 11:39:45.728853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.385 [2024-07-15 11:39:45.728884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.729079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.729109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.729390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.729422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.729608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.729638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.729837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.729873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.730006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.730037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.730425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.730457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 
00:29:02.386 [2024-07-15 11:39:45.730750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.730781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.730975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.731005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.731136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.731166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.731305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.731336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.731466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.731497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.731695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.731725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.731915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.731945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.732080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.732110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.732304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.732334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.732478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.732508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 
00:29:02.386 [2024-07-15 11:39:45.732733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.732763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.733021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.733050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.733243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.733274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.733423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.733453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.733643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.733673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.733876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.733907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.734105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.734134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.734365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.734396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.734531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.734561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.734670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.734701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 
00:29:02.386 [2024-07-15 11:39:45.734908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.734937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.735154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.735185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.735316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.735348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.735582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.735613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.735925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.735994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.736201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.736249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.736400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.736432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.736631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.736661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.736859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.386 [2024-07-15 11:39:45.736889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.386 qpair failed and we were unable to recover it. 00:29:02.386 [2024-07-15 11:39:45.737109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.737141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 
00:29:02.387 [2024-07-15 11:39:45.737273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.737304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.737532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.737563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.737691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.737723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.737999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.738029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.738205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.738245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.738430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.738462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.738653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.738684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.738892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.738932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.739139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.739170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.739366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.739398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 
00:29:02.387 [2024-07-15 11:39:45.739591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.739622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.739772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.739803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.739991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.740021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.740170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.740201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.740345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.740377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.740627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.740658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.740864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.740896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.741119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.741150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.741323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.741354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.741604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.741635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 
00:29:02.387 [2024-07-15 11:39:45.741827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.741858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.742012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.742043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.742236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.742268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.742401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.742433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.742622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.742653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.742836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.742866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.742983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.743014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.743296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.743329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.743472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.743503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.743641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.743673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 
00:29:02.387 [2024-07-15 11:39:45.743879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.743909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.744174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.744204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.744411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.744442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.744578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.744608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.744874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.744945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.745197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.745252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.745515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.745547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.745735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.745765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.746071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.746102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 00:29:02.387 [2024-07-15 11:39:45.746251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.387 [2024-07-15 11:39:45.746283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.387 qpair failed and we were unable to recover it. 
00:29:02.388 [2024-07-15 11:39:45.746498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.746529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.746683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.746714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.746994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.747025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.747218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.747257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.747445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.747475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.747688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.747719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.747923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.747954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.748100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.748147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.748313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.748344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.748558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.748589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 
00:29:02.388 [2024-07-15 11:39:45.748894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.748926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.749130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.749161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.749380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.749412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.749689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.749720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.749900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.749932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.750059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.750089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.750312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.750343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.750478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.750509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.750783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.750813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.751033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.751063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 
00:29:02.388 [2024-07-15 11:39:45.751258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.751289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.751522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.751553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.751738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.751769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.751902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.751933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.752185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.752216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.752504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.752535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.752720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.752750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.752878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.752909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.753163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.753193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.388 [2024-07-15 11:39:45.753348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.753381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 
00:29:02.388 [2024-07-15 11:39:45.753517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.388 [2024-07-15 11:39:45.753547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.388 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.753766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.753796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.754002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.754033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.754191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.754221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.754438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.754473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.754725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.754756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.754877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.754908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.755160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.755190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.755396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.755429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.755566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.755596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 
00:29:02.389 [2024-07-15 11:39:45.755740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.755770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.755916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.755948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.756147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.756178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.756333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.756364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.756557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.756587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.756773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.756803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.756987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.757018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.757233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.757269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.757459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.757490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.757741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.757772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 
00:29:02.389 [2024-07-15 11:39:45.757905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.757936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.758138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.758169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.758368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.758400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.758595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.758626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.758770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.758800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.759003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.759034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.759244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.759280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.759462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.759492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.759687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.759717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.759836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.759866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 
00:29:02.389 [2024-07-15 11:39:45.760050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.760080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.760340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.760374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.760511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.760544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.760770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.760802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.760938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.760969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.761102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.761134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.761267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.761298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.761416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.761447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.761646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.761677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.761858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.761888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 
00:29:02.389 [2024-07-15 11:39:45.762139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.389 [2024-07-15 11:39:45.762169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.389 qpair failed and we were unable to recover it. 00:29:02.389 [2024-07-15 11:39:45.762376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.762408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.762607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.762637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.762903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.762933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.763143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.763173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.763368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.763399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.763593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.763624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.763814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.763844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.763986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.764016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.764268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.764300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 
00:29:02.390 [2024-07-15 11:39:45.764490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.764521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.764650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.764680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.764876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.764906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.765104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.765135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.765269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.765300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.765495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.765526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.765792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.765822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.765954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.765989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.766112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.766142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.766259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.766290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 
00:29:02.390 [2024-07-15 11:39:45.766564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.766595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.766723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.766754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.766936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.766967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.767090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.767120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.767254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.390 [2024-07-15 11:39:45.767285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.390 qpair failed and we were unable to recover it. 00:29:02.390 [2024-07-15 11:39:45.767470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.767501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.767752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.767782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.767986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.768017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.768204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.768244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.768448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.768478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 
00:29:02.391 [2024-07-15 11:39:45.768751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.768782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.768928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.768959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.769206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.769249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.769392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.769422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.769652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.769682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.769906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.769936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.770124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.770154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.770353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.770384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.770569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.770600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.770810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.770841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 
00:29:02.391 [2024-07-15 11:39:45.770967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.770998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.771208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.771246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.771436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.771467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.771691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.771722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.771867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.771898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.772043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.772074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.772320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.772351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.772494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.772524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.772673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.772704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.772955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.772985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 
00:29:02.391 [2024-07-15 11:39:45.773116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.773147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.773264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.773295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.773476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.773506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.773803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.773833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.774023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.774053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.774250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.774281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.774420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.774451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.774588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.774618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.774874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.774904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.775026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.775055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 
00:29:02.391 [2024-07-15 11:39:45.775242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.775273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.775467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.775498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.775632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.775664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.775864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.391 [2024-07-15 11:39:45.775894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.391 qpair failed and we were unable to recover it. 00:29:02.391 [2024-07-15 11:39:45.776116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.776146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.776348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.776379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.776601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.776631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.776817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.776847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.777032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.777063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.777178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.777208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 
00:29:02.392 [2024-07-15 11:39:45.777414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.777444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.777592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.777622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.777747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.777777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.777910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.777940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.778192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.778222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.778509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.778539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.778711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.778742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.778888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.778918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.779137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.779168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.779310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.779342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 
00:29:02.392 [2024-07-15 11:39:45.779485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.779516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.779707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.779738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.780011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.780042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.780236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.780268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.780372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.780408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.780592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.780623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.780828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.780860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.781063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.781094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.781279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.781311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.781496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.781527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 
00:29:02.392 [2024-07-15 11:39:45.781800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.781829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.782029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.782059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.782311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.782342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.782640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.782670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.782870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.782900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.783129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.783160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.783375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.783405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.783673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.783703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.783914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.783944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.392 [2024-07-15 11:39:45.784134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.784164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 
00:29:02.392 [2024-07-15 11:39:45.784386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-07-15 11:39:45.784416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.392 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.784559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.784589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.784791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.784822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.785011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.785041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.785180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.785210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.785373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.785404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.785682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.785712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.785912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.785942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.786075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.786105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.786378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.786408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 
00:29:02.393 [2024-07-15 11:39:45.786624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.786654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.786851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.786882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.787169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.787199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.787344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.787375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.787584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.787615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.787882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.787913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.788185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.788215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.788421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.788452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.788653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.788684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.788883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.788914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 
00:29:02.393 [2024-07-15 11:39:45.789049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.789080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.789209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.789258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.789471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.789502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.789723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.789753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.789948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.789983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.790169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.790200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.790417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.790448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.790634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.790664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.790795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.790825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.791099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.791130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 
00:29:02.393 [2024-07-15 11:39:45.791276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.791308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.791444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.791474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.393 [2024-07-15 11:39:45.791748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-07-15 11:39:45.791778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.393 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.791926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.791957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.792236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.792268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.792395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.792425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.792623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.792654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.792785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.792816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.793024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.793054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.793185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.793215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 
00:29:02.394 [2024-07-15 11:39:45.793439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.793469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.793662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.793692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.793884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.793915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.794043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.794073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.794180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.794209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.794512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.794543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.794823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.794854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.795002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.795032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.795157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.795187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.795381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.795413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 
00:29:02.394 [2024-07-15 11:39:45.795532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.795563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.795772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.795802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.796079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.796109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.796380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.796412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.796639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.796670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.796875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.796905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.797035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.797066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.797264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.797294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.797479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.797509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.797711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.797741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 
00:29:02.394 [2024-07-15 11:39:45.797863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.797893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.798077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.798108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.798305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.798336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.798609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.798640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.798792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.798828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.799108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.799139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.799404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.799436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.799632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.799663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.799856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.799886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.800022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.800052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 
00:29:02.394 [2024-07-15 11:39:45.800346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.800377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.394 qpair failed and we were unable to recover it. 00:29:02.394 [2024-07-15 11:39:45.800661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.394 [2024-07-15 11:39:45.800692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.800873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.800903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.801084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.801114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.801367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.801399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.801600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.801630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.801909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.801939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.802143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.802174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.802393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.802425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.802617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.802648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 
00:29:02.395 [2024-07-15 11:39:45.802830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.802860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.803149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.803180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.803375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.803406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.803660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.803690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.803889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.803919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.804189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.804219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.804511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.804541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.804685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.804715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.804917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.804948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.805198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.805237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 
00:29:02.395 [2024-07-15 11:39:45.805416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.805446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.805703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.805734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.805864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.805894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.806096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.806126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.806323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.806355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.806560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.806590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.806778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.806808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.806951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.806981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.807114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.807146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.807284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.807316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 
00:29:02.395 [2024-07-15 11:39:45.807454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.807484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.807759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.807790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.807910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.807940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.808135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.808164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.808300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.395 [2024-07-15 11:39:45.808336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.395 qpair failed and we were unable to recover it. 00:29:02.395 [2024-07-15 11:39:45.808553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.808583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.808785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.808816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.809002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.809032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.809167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.809197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.809396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.809428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 
00:29:02.396 [2024-07-15 11:39:45.809682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.809713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.809961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.809992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.810121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.810150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.810287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.810318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.810596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.810626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.810879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.810909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.811026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.811057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.811251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.811281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.811520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.811550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.811687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.811717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 
00:29:02.396 [2024-07-15 11:39:45.811837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.811867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.812054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.812085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.812268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.812300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.812525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.812556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.812750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.812780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.813050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.813080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.813333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.813364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.813546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.813577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.813770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.813800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.814081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.814111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 
00:29:02.396 [2024-07-15 11:39:45.814241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.814272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.814470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.814501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.814623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.814653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.814786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.814816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.815050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.815080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.815276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.815307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.815445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.815475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.815691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.815721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.816037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.816067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.816207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.816246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 
00:29:02.396 [2024-07-15 11:39:45.816382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.816412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.816549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.816580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.816717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.816747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.816946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.816976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.817179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.817215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-07-15 11:39:45.817426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.396 [2024-07-15 11:39:45.817457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.817707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.817738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.817871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.817901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.818207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.818247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.818510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.818540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 
00:29:02.397 [2024-07-15 11:39:45.818730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.818761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.819055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.819085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.819359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.819391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.819524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.819555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.819756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.819786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.819922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.819952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.820075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.820106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.820290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.820321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.820581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.820612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.820906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.820937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 
00:29:02.397 [2024-07-15 11:39:45.821200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.821253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.821451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.821481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.821680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.821711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.821921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.821953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.822157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.822187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.822392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.822424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.822676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.822706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.822854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.822885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.823084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.823114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.823259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.823290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 
00:29:02.397 [2024-07-15 11:39:45.823437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.823467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.823613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.823644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.823857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.823887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.824078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.824109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.824309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.824356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.824628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.824657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.824907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.824936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.825072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.825102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.825250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.825281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.825420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.825450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 
00:29:02.397 [2024-07-15 11:39:45.825575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.825606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.825812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.825842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.826027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.826058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.826246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.826278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.826414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.826450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.826600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.397 [2024-07-15 11:39:45.826630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-07-15 11:39:45.826763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.826793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.826976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.827005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.827254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.827286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.827433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.827463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 
00:29:02.398 [2024-07-15 11:39:45.827602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.827632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.827758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.827788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.827985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.828015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.828269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.828300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.828496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.828527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.828729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.828759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.828963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.828993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.829197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.829233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.829419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.829449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.829710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.829741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 
00:29:02.398 [2024-07-15 11:39:45.829869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.829900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.830086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.830116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.830365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.830397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.830675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.830705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.830892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.830922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.831110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.831141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.831391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.831423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.831529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.831560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.831744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.831773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.831969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.831999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 
00:29:02.398 [2024-07-15 11:39:45.832208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.832246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.832531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.832561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.832790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.832820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.833033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.833063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.833282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.833313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.833565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.833595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.833784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.833814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.834068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.834098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.834254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.834284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.834493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.834524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 
00:29:02.398 [2024-07-15 11:39:45.834788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.834818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.835071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.835101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.835320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.835352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.835555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.835586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.835731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.835767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.835898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.835928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.836124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-07-15 11:39:45.836155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-07-15 11:39:45.836351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.836383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.836570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.836601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.836875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.836905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 
00:29:02.399 [2024-07-15 11:39:45.837164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.837194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.837455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.837486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.837687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.837717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.837965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.837996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.838178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.838208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.838424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.838455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.838715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.838746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.838929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.838960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.839101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.839131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.839328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.839359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 
00:29:02.399 [2024-07-15 11:39:45.839671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.839702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.839978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.840008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.840192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.840222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.840503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.840533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.840682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.840712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.840899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.840928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.841063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.841094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.841275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.841307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.841510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.841541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.841666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.841696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 
00:29:02.399 [2024-07-15 11:39:45.841967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.841997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.842135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.842167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.842389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.842420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.842544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.842575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.842791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.842821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.843010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.843040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.843234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.843268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.843471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.843501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.843707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.843736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 00:29:02.399 [2024-07-15 11:39:45.843984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.399 [2024-07-15 11:39:45.844014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.399 qpair failed and we were unable to recover it. 
00:29:02.404 [2024-07-15 11:39:45.887113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.404 [2024-07-15 11:39:45.887144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.404 qpair failed and we were unable to recover it. 00:29:02.404 [2024-07-15 11:39:45.887344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.404 [2024-07-15 11:39:45.887376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.404 qpair failed and we were unable to recover it. 00:29:02.404 [2024-07-15 11:39:45.887576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.404 [2024-07-15 11:39:45.887607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.404 qpair failed and we were unable to recover it. 00:29:02.404 [2024-07-15 11:39:45.887793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.404 [2024-07-15 11:39:45.887823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.404 qpair failed and we were unable to recover it. 00:29:02.404 [2024-07-15 11:39:45.888025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.404 [2024-07-15 11:39:45.888055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.404 qpair failed and we were unable to recover it. 00:29:02.404 [2024-07-15 11:39:45.888270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.404 [2024-07-15 11:39:45.888301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.404 qpair failed and we were unable to recover it. 00:29:02.404 [2024-07-15 11:39:45.888460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.404 [2024-07-15 11:39:45.888491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.404 qpair failed and we were unable to recover it. 00:29:02.404 [2024-07-15 11:39:45.888658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.404 [2024-07-15 11:39:45.888689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.404 qpair failed and we were unable to recover it. 00:29:02.404 [2024-07-15 11:39:45.888886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.888916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.889125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.889156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 
00:29:02.405 [2024-07-15 11:39:45.889406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.889437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.889689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.889719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.889915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.889946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.890081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.890111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.890254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.890285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.890542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.890573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.890872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.890902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.891176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.891206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.891450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.891481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.891614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.891645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 
00:29:02.405 [2024-07-15 11:39:45.891897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.891928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.892075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.892105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.892297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.892328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.892514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.892549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.892743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.892773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.892975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.893005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.893198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.893249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.893523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.893554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.893698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.893728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.893864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.893895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 
00:29:02.405 [2024-07-15 11:39:45.894029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.894059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.894255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.894287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.894481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.894512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.894723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.894753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.895048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.895079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.895359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.895390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.895534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.895564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.895714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.895745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.895881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.895912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.896122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.896153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 
00:29:02.405 [2024-07-15 11:39:45.896360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.896391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.896530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.896566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.896716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.896746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.896931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.896961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.897212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.897250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.897451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.897482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.897632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.897662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.897806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.897837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.405 qpair failed and we were unable to recover it. 00:29:02.405 [2024-07-15 11:39:45.898023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.405 [2024-07-15 11:39:45.898054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.898243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.898274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 
00:29:02.406 [2024-07-15 11:39:45.898551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.898582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.898782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.898813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.899000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.899030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.899258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.899291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.899484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.899516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.899748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.899779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.899983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.900013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.900199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.900236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.900439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.900470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.900667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.900697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 
00:29:02.406 [2024-07-15 11:39:45.900845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.900875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.901076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.901107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.901359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.901390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.901605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.901636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.901902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.901931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.902238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.902271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.902502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.902533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.902826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.902856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.903060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.903091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.903294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.903324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 
00:29:02.406 [2024-07-15 11:39:45.903602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.903632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.903773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.903804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.903989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.904019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.904286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.904317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.904528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.904558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.904830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.904861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.905079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.905109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.905357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.905388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.905668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.905698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.905910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.905940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 
00:29:02.406 [2024-07-15 11:39:45.906217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.906256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.906531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.906571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.906709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.906739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.907011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.907041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.907296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.907327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.907585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.907616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.907811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.907842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.908027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.908057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.908258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.406 [2024-07-15 11:39:45.908289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.406 qpair failed and we were unable to recover it. 00:29:02.406 [2024-07-15 11:39:45.908539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.908570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 
00:29:02.407 [2024-07-15 11:39:45.908804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.908834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.909086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.909117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.909321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.909352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.909627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.909658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.909909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.909939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.910139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.910169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.910372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.910403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.910676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.910707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.910981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.911012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.911279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.911310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 
00:29:02.407 [2024-07-15 11:39:45.911541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.911572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.911755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.911798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.912105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.912135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.912430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.912461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.912724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.912754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.912973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.913004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.913265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.913296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.913597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.913627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.913833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.913863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.914136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.914167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 
00:29:02.407 [2024-07-15 11:39:45.914476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.914507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.914696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.914726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.914947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.914978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.915167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.915197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.915411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.915442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.915642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.915672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.915948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.915978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.916242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.916274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.916547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.916577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.916851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.916882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 
00:29:02.407 [2024-07-15 11:39:45.917069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.917100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.917353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.917390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.917577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.917607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.917857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.917888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.918121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.918152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.918455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.918486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.918758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.918789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.407 qpair failed and we were unable to recover it. 00:29:02.407 [2024-07-15 11:39:45.918936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.407 [2024-07-15 11:39:45.918968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.919170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.919200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.919482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.919513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 
00:29:02.408 [2024-07-15 11:39:45.919788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.919818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.919954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.919984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.920288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.920319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.920592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.920622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.920827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.920857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.921134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.921164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.921466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.921498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.921774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.921805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.922024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.922053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.922334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.922365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 
00:29:02.408 [2024-07-15 11:39:45.922669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.922699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.922971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.923000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.923305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.923336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.923614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.923644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.923949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.923978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.924111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.924141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.924397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.924428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.924579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.924610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.924830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.924861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 00:29:02.408 [2024-07-15 11:39:45.925156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.408 [2024-07-15 11:39:45.925186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.408 qpair failed and we were unable to recover it. 
00:29:02.732 [2024-07-15 11:39:45.976363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.976395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.976601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.976632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.976922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.976953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.977263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.977294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.977569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.977600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.977833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.977864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.978091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.978121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.978405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.978437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.978725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.978755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.979056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.979087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 
00:29:02.732 [2024-07-15 11:39:45.979285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.979316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.979460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.979496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.979652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.979683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.979909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.979940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.980191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.980222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.980444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.980476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.980615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.980646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.980797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.980827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.981064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.981096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.981350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.981382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 
00:29:02.732 [2024-07-15 11:39:45.981596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.981627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.981817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.981847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.732 [2024-07-15 11:39:45.981981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 11:39:45.982011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.732 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.982141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.982172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.982330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.982362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.982612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.982644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.982914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.982945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.983246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.983278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.983519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.983551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.983878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.983911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 
00:29:02.733 [2024-07-15 11:39:45.984202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.984245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.984525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.984556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.984841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.984872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.985100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.985131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.985419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.985451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.985657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.985688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.985951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.985982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.986105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.986134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.986399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.986432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.986650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.986680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 
00:29:02.733 [2024-07-15 11:39:45.990280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.990338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.990563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.990599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.990821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.990857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.991014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.991047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.991333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.991369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.991665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.991701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.992654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.992694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.993004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.993037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.993252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.993287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.993532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.993566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 
00:29:02.733 [2024-07-15 11:39:45.993856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.993894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.994112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.994152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.994439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.994472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.994646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.994677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.994965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.994996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.995282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.995314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.995481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.995512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.995668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.995700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.996471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.996517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.996765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.996797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 
00:29:02.733 [2024-07-15 11:39:45.997038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.997070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.997275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.997309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.997458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.997492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.997659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.997690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.997836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.997869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.998101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.998134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.998361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.998395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.998594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.998628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.733 qpair failed and we were unable to recover it. 00:29:02.733 [2024-07-15 11:39:45.998822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 11:39:45.998853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:45.999013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:45.999045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 
00:29:02.734 [2024-07-15 11:39:45.999254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:45.999281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:45.999463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:45.999494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:45.999773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:45.999806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:45.999975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.000001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.000133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.000160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.000372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.000399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.000544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.000569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.000711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.000736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.000937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.000963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.001149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.001175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 
00:29:02.734 [2024-07-15 11:39:46.001383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.001409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.001668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.001693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.001979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.002011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.002161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.002192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.002431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.002464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.002660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.002685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.002940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.002965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.003211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.003247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.003723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.003755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.003961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.004005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 
00:29:02.734 [2024-07-15 11:39:46.004132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.004157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.004320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.004353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.004488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.004514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.004765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.004790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.005085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.005110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.005261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.005288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.005585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.005610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.005763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.005788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.005971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.005996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.006126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.006152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 
00:29:02.734 [2024-07-15 11:39:46.006358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.006385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.006503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.006528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.006778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.006804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.007132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.007164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.007377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.007409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.007564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.007597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.007742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.007773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.007976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.008006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.008200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.008244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.008385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.008416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 
00:29:02.734 [2024-07-15 11:39:46.008700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.008731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.008939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.008971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.009222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.009266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.009433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.009465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.009677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.009709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.010028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.010059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.010277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.010311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.010459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.010490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.010768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.010800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.011000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.011031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 
00:29:02.734 [2024-07-15 11:39:46.011312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.011346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.011562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.011594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.011808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.011841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.012041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.012072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.012270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.012302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.012449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.012482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.012628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.012660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.012943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.012974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.013272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.013306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.013594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.013625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 
00:29:02.734 [2024-07-15 11:39:46.013861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.013893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.014145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.014181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.014363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.014399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.014661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.014693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.015016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.015048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.015333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.015368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.734 qpair failed and we were unable to recover it. 00:29:02.734 [2024-07-15 11:39:46.015672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.734 [2024-07-15 11:39:46.015704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.735 qpair failed and we were unable to recover it. 00:29:02.735 [2024-07-15 11:39:46.015854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.735 [2024-07-15 11:39:46.015886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.735 qpair failed and we were unable to recover it. 00:29:02.735 [2024-07-15 11:39:46.016097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.735 [2024-07-15 11:39:46.016129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.735 qpair failed and we were unable to recover it. 00:29:02.735 [2024-07-15 11:39:46.016396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.735 [2024-07-15 11:39:46.016428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.735 qpair failed and we were unable to recover it. 
00:29:02.737 [2024-07-15 11:39:46.072470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.737 [2024-07-15 11:39:46.072502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.737 qpair failed and we were unable to recover it. 00:29:02.737 [2024-07-15 11:39:46.072734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.737 [2024-07-15 11:39:46.072766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.737 qpair failed and we were unable to recover it. 00:29:02.737 [2024-07-15 11:39:46.072964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.737 [2024-07-15 11:39:46.072996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.737 qpair failed and we were unable to recover it. 00:29:02.737 [2024-07-15 11:39:46.073306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.737 [2024-07-15 11:39:46.073339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.737 qpair failed and we were unable to recover it. 00:29:02.737 [2024-07-15 11:39:46.073576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.737 [2024-07-15 11:39:46.073608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.737 qpair failed and we were unable to recover it. 00:29:02.737 [2024-07-15 11:39:46.073940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.737 [2024-07-15 11:39:46.073972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.737 qpair failed and we were unable to recover it. 00:29:02.737 [2024-07-15 11:39:46.074190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.737 [2024-07-15 11:39:46.074222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.737 qpair failed and we were unable to recover it. 00:29:02.737 [2024-07-15 11:39:46.074571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.737 [2024-07-15 11:39:46.074603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.737 qpair failed and we were unable to recover it. 00:29:02.737 [2024-07-15 11:39:46.074803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.737 [2024-07-15 11:39:46.074834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.737 qpair failed and we were unable to recover it. 00:29:02.737 [2024-07-15 11:39:46.075098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.737 [2024-07-15 11:39:46.075130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.737 qpair failed and we were unable to recover it. 
00:29:02.738 [2024-07-15 11:39:46.075416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.075450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.075662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.075693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.075893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.075925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.076145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.076177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.076427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.076461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.076691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.076723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.076935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.076967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.077264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.077298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.077589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.077626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.077939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.077971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 
00:29:02.738 [2024-07-15 11:39:46.078257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.078290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.078487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.078519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.078679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.078711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.079013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.079046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.079360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.079394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.079623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.079656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.079852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.079885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.080087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.080118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.080272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.080305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.080543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.080575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 
00:29:02.738 [2024-07-15 11:39:46.080870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.080901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.081131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.081163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.081448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.081482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.081722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.081755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.082026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.082058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.082371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.082404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.082612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.082644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.082914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.082946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.083259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.083292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.083574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.083605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 
00:29:02.738 [2024-07-15 11:39:46.083750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.083783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.083981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.084013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.084300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.084332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.084539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.084571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.084815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.084847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.085142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.085174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.085333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.085367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.085583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.085614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.085842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.085874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.086145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.086178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 
00:29:02.738 [2024-07-15 11:39:46.086429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.086462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.086752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.086783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.087023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.087055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.087331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.087364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.087578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.087610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.087817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.087850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.088057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.088089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.088377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.088410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.088606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.088643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.088889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.088921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 
00:29:02.738 [2024-07-15 11:39:46.089062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.089094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.089295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.089328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.089596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.089629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.089941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.089973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.090275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.090307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.090526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.090558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.090708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.090740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.090978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.091010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.091300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.091334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.091603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.091635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 
00:29:02.738 [2024-07-15 11:39:46.091933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.091965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.092201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.092242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.092559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.092592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.092790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.092822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.093089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.093121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.093428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.093462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.093740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.093771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.093913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.093945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.094259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.094292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.094522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.094554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 
00:29:02.738 [2024-07-15 11:39:46.094753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.094785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.094983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.095015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.095247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.095282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.095479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.095512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.095722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.095754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.095978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.096011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.738 [2024-07-15 11:39:46.096298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.738 [2024-07-15 11:39:46.096332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.738 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.096602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.096634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.096908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.096940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.097164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.097196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 
00:29:02.739 [2024-07-15 11:39:46.097475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.097507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.097783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.097815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.098045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.098076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.098283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.098316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.098590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.098623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.098943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.098974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.099182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.099215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.099543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.099575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.099728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.099766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.100053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.100085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 
00:29:02.739 [2024-07-15 11:39:46.100321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.100355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.100487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.100520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.100743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.100776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.101088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.101120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.101425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.101458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.101696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.101728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.101945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.101976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.102262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.102295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.102536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.102568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.102845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.102877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 
00:29:02.739 [2024-07-15 11:39:46.103115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.103148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.103383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.103415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.103693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.103726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.103934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.103967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.104255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.104288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.104530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.104561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.104839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.104871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.105112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.105144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.105436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.105469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.105688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.105720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 
00:29:02.739 [2024-07-15 11:39:46.105927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.105959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.106258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.106292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.106585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.106617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.106904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.106936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.107179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.107211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.107514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.107547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.107844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.107877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.108167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.108198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.108493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.108525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.108658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.108690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 
00:29:02.739 [2024-07-15 11:39:46.108923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.108955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.109168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.109200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.109504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.109536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.109760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.109791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.110079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.110110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.110323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.110356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.110578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.110610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.110925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.110956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.111180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.111217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 00:29:02.739 [2024-07-15 11:39:46.111530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.739 [2024-07-15 11:39:46.111562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.739 qpair failed and we were unable to recover it. 
00:29:02.742 [2024-07-15 11:39:46.168371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.168405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.168637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.168669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.168872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.168905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.169110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.169142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.169433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.169466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.169778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.169810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.170120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.170151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.170383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.170416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.170641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.170673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.170986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.171018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 
00:29:02.742 [2024-07-15 11:39:46.171294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.171328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.171626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.171658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.171938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.171970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.172185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.172217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.172494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.172526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.172731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.172763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.173040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.173072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.173339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.173373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.173585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.173617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.173905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.173937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 
00:29:02.742 [2024-07-15 11:39:46.174206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.174256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.174505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.174535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.174850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.174881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.175040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.175072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.175347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.175380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.175550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.175587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.175797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.175830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.176046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.176079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.176285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.176318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.176514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.176547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 
00:29:02.742 [2024-07-15 11:39:46.176823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.176855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.177130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.177163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.177484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.177517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.177781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.177813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.178030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.178063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.178339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.178372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.178519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.178551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.178718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.178750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.179034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.179067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 00:29:02.742 [2024-07-15 11:39:46.179379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.742 [2024-07-15 11:39:46.179412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.742 qpair failed and we were unable to recover it. 
00:29:02.743 [2024-07-15 11:39:46.179565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.179597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.179847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.179879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.180108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.180140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.180380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.180413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.180683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.180716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.180912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.180945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.181158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.181189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.181428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.181461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.181626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.181659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.181905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.181937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 
00:29:02.743 [2024-07-15 11:39:46.182151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.182183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.182401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.182435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.182667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.182699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.182876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.182908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.183198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.183243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.183512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.183543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.183815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.183847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.184058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.184090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.184383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.184416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.184633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.184665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 
00:29:02.743 [2024-07-15 11:39:46.184884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.184916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.185153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.185184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.185396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.185428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.185705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.185737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.186031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.186062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.186398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.186438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.186656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.186688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.186890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.186921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.187242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.187275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.187494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.187526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 
00:29:02.743 [2024-07-15 11:39:46.187727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.187759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.188049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.188080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.188372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.188406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.188623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.188655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.188953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.188985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.189223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.189268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.189493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.189525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.189718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.189751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.189906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.189939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.190157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.190189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 
00:29:02.743 [2024-07-15 11:39:46.190508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.190542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.190705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.190736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.190934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.190966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.191246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.191278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.191519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.191550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.191757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.191789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.192010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.192042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.192329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.192363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.192565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.192597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.192766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.192798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 
00:29:02.743 [2024-07-15 11:39:46.193081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.193114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.193368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.193400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.193570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.193602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.193800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.193831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.194088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.194120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.194391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.194425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.194691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.194723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.194998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.195031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.195172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.195204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.195416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.195450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 
00:29:02.743 [2024-07-15 11:39:46.195737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.195769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.195999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.196030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.196302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.196336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.196654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.196685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.196933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.196965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.197244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.197283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.197558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.197591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.197895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.197926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.198267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.198300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.198521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.198554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 
00:29:02.743 [2024-07-15 11:39:46.198843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.198874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.199140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.199172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.199478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.199512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.199809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.199840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.200150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.200182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.200440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.743 [2024-07-15 11:39:46.200473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.743 qpair failed and we were unable to recover it. 00:29:02.743 [2024-07-15 11:39:46.200761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.200792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 00:29:02.744 [2024-07-15 11:39:46.200992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.201024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 00:29:02.744 [2024-07-15 11:39:46.201240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.201273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 00:29:02.744 [2024-07-15 11:39:46.201476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.201509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 
00:29:02.744 [2024-07-15 11:39:46.201717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.201749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 00:29:02.744 [2024-07-15 11:39:46.201982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.202013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 00:29:02.744 [2024-07-15 11:39:46.202210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.202253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 00:29:02.744 [2024-07-15 11:39:46.202571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.202603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 00:29:02.744 [2024-07-15 11:39:46.202881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.202912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 00:29:02.744 [2024-07-15 11:39:46.203124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.203155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 00:29:02.744 [2024-07-15 11:39:46.203314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.203348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 00:29:02.744 [2024-07-15 11:39:46.203645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.203677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 00:29:02.744 [2024-07-15 11:39:46.203986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.204018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 00:29:02.744 [2024-07-15 11:39:46.204332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.744 [2024-07-15 11:39:46.204365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.744 qpair failed and we were unable to recover it. 
00:29:02.744 [2024-07-15 11:39:46.204580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.744 [2024-07-15 11:39:46.204611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420
00:29:02.744 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error triplet repeats continuously from 11:39:46.204 through 11:39:46.261 for tqpair=0x7f6258000b90 (addr=10.0.0.2, port=4420); every connect() attempt fails with errno = 111 and the qpair cannot be recovered ...]
00:29:02.747 [2024-07-15 11:39:46.261066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.747 [2024-07-15 11:39:46.261098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420
00:29:02.747 qpair failed and we were unable to recover it.
00:29:02.747 [2024-07-15 11:39:46.261296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.747 [2024-07-15 11:39:46.261328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.747 qpair failed and we were unable to recover it. 00:29:02.747 [2024-07-15 11:39:46.261540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.747 [2024-07-15 11:39:46.261571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.747 qpair failed and we were unable to recover it. 00:29:02.747 [2024-07-15 11:39:46.261851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.747 [2024-07-15 11:39:46.261881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.747 qpair failed and we were unable to recover it. 00:29:02.747 [2024-07-15 11:39:46.262151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.747 [2024-07-15 11:39:46.262182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.747 qpair failed and we were unable to recover it. 00:29:02.747 [2024-07-15 11:39:46.262429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.747 [2024-07-15 11:39:46.262461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.747 qpair failed and we were unable to recover it. 00:29:02.747 [2024-07-15 11:39:46.262616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.747 [2024-07-15 11:39:46.262653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.747 qpair failed and we were unable to recover it. 00:29:02.747 [2024-07-15 11:39:46.262950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.747 [2024-07-15 11:39:46.262981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.747 qpair failed and we were unable to recover it. 00:29:02.747 [2024-07-15 11:39:46.263120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.747 [2024-07-15 11:39:46.263151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.747 qpair failed and we were unable to recover it. 00:29:02.747 [2024-07-15 11:39:46.263284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.747 [2024-07-15 11:39:46.263316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:02.747 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.263561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.263594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 
00:29:03.025 [2024-07-15 11:39:46.263882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.263913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.264113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.264144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.264452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.264484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.264762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.264794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.265004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.265035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.265172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.265204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.265376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.265409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.265588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.265620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.265914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.265946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.266177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.266209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 
00:29:03.025 [2024-07-15 11:39:46.266495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.266527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.266748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.266783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.266936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.266968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.267266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.267302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.267451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.267483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.267781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.267813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.268012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.268044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.268193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.268224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.268447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.268479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 00:29:03.025 [2024-07-15 11:39:46.268753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.025 [2024-07-15 11:39:46.268784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.025 qpair failed and we were unable to recover it. 
00:29:03.025 [2024-07-15 11:39:46.269020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.269052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.269267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.269300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.269598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.269629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.269910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.269942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.270161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.270193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.270545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.270578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.270870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.270901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.271139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.271171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.271481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.271514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.271807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.271838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 
00:29:03.026 [2024-07-15 11:39:46.272054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.272086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.272284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.272318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.272625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.272657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.272868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.272899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.273181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.273212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.273526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.273564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.273784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.273817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.274032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.274064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.274248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.274280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.274478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.274509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 
00:29:03.026 [2024-07-15 11:39:46.274741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.274773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.275088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.275120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.275399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.275432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.275681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.275713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.276027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.276059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.276255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.276290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.276440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.276472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.276674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.276706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.276851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.276882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.277174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.277206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 
00:29:03.026 [2024-07-15 11:39:46.277355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.277388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.277673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.277705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.277835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.277867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.278009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.278040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.278276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.278309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.278555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.278587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.278822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.278853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.279057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.279090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.279312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.279344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 00:29:03.026 [2024-07-15 11:39:46.279582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.026 [2024-07-15 11:39:46.279614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.026 qpair failed and we were unable to recover it. 
00:29:03.026 [2024-07-15 11:39:46.279817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.279849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.280047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.280078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.280371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.280405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.280623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.280654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.280968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.281000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.281217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.281260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.281435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.281467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.281681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.281713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.281987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.282018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.282245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.282278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 
00:29:03.027 [2024-07-15 11:39:46.282454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.282486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.282697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.282730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.282935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.282966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.283178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.283210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.283455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.283489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.283688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.283724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.283932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.283964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.284179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.284210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.284379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.284411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.284611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.284643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 
00:29:03.027 [2024-07-15 11:39:46.284842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.284873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.285167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.285199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.285453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.285484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.285770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.285801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.286031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.286062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.286264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.286299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.286506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.286536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.286758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.286791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.286939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.286970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.287248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.287280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 
00:29:03.027 [2024-07-15 11:39:46.287480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.287511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.287761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.287793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.288091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.288122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.288338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.288372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.288588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.288619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.288833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.288864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.289079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.289111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.289324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.289358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.289571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.027 [2024-07-15 11:39:46.289602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.027 qpair failed and we were unable to recover it. 00:29:03.027 [2024-07-15 11:39:46.289918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.289949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 
00:29:03.028 [2024-07-15 11:39:46.290223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.290275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.290486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.290518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.290807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.290840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.290986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.291017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.291247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.291280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.291414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.291445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.291732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.291763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.291983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.292015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.292243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.292278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.292510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.292541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 
00:29:03.028 [2024-07-15 11:39:46.292814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.292846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.293112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.293144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.293417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.293451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.293663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.293695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.293966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.293997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.294203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.294266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.294608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.294640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.294905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.294937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.295180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.295212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.295493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.295525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 
00:29:03.028 [2024-07-15 11:39:46.295727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.295758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.296049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.296080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.296311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.296344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.296631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.296662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.296979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.297010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.297287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.297320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.297523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.297554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.297764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.297796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.298015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.298047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.298324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.298358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 
00:29:03.028 [2024-07-15 11:39:46.298678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.298710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.298936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.298967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.299262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.299296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.299585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.299616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.299881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.299913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.300249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.300283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.300597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.300629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.300893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.028 [2024-07-15 11:39:46.300925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.028 qpair failed and we were unable to recover it. 00:29:03.028 [2024-07-15 11:39:46.301193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.301238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.301542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.301573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 
00:29:03.029 [2024-07-15 11:39:46.301863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.301895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.302140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.302172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.302327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.302361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.302532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.302563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.302781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.302813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.303087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.303119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.303386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.303419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.303722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.303753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.304039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.304071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.304216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.304266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 
00:29:03.029 [2024-07-15 11:39:46.304474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.304506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.304667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.304699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.304943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.304974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.305144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.305175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.305392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.305425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.305716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.305754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.305960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.305992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.306139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.306171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.306459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.306492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.306705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.306736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 
00:29:03.029 [2024-07-15 11:39:46.307005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.307036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.307252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.307285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.307489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.307522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.307793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.307824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.308093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.308124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.308414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.308447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.308663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.308695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.308916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.308948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.309263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.309297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.029 [2024-07-15 11:39:46.309605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.309637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 
00:29:03.029 [2024-07-15 11:39:46.309837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.029 [2024-07-15 11:39:46.309869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.029 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.310105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.310137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.310429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.310464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.310639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.310670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.310916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.310947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.311222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.311266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.311561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.311593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.311919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.311951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.312251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.312284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.312573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.312605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 
00:29:03.030 [2024-07-15 11:39:46.312802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.312833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.313124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.313156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.313428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.313472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.313740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.313772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.314070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.314102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.314338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.314372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.314583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.314615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.314905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.314937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.315137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.315168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.315474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.315506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 
00:29:03.030 [2024-07-15 11:39:46.315803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.315835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.316124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.316156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.316459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.316492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.316772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.316804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.317045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.317076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.317368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.317400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.317696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.317729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.318019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.318051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.318264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.318297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.318616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.318647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 
00:29:03.030 [2024-07-15 11:39:46.318781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.318813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.319023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.319054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.319324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.319357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.319612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.319643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.319853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.319884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.320165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.320197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.320452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.320484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.320684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.320716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.320957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.320990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.030 [2024-07-15 11:39:46.321316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.321350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 
00:29:03.030 [2024-07-15 11:39:46.321560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.030 [2024-07-15 11:39:46.321592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.030 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.321870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.321901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.322188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.322220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.322520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.322553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.322838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.322870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.323169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.323200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.323425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.323457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.323738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.323770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.324092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.324123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.324287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.324319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 
00:29:03.031 [2024-07-15 11:39:46.324601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.324634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.324931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.324964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.325243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.325283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.325578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.325610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.325827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.325859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.326062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.326094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.326397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.326430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.326714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.326746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.326945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.326978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.327257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.327290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 
00:29:03.031 [2024-07-15 11:39:46.327416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.327448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.327680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.327712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.327868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.327900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.328111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.328143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.328447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.328480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.328761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.328793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.329071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.329104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.329382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.329415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.329698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.329730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.330032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.330064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 
00:29:03.031 [2024-07-15 11:39:46.330276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.330309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.330514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.330546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.330814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.330846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.331057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.331089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.331381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.331414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.331619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.331651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.331914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.331947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.332158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.332190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.332501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.332534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.332762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.332793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 
00:29:03.031 [2024-07-15 11:39:46.333074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.333105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.031 [2024-07-15 11:39:46.333395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.031 [2024-07-15 11:39:46.333428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.031 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.333723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.333756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.334070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.334101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.334411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.334445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.334663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.334695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.334960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.334992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.335192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.335235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.335451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.335483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.335800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.335832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 
00:29:03.032 [2024-07-15 11:39:46.336117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.336149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.336305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.336338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.336606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.336643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.336945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.336978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.337259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.337291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.337491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.337523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.337742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.337773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.338088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.338119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.338317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.338351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.338576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.338608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 
00:29:03.032 [2024-07-15 11:39:46.338875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.338905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.339248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.339281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.339573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.339605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.339894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.339926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.340196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.340238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.340455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.340488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.340771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.340803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.341023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.341055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.341357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.341390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.341608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.341640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 
00:29:03.032 [2024-07-15 11:39:46.341864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.341896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.342141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.342173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.342505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.342538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.342758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.342790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.343086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.343118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.343416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.343450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.343732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.343764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.344056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.344088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.344395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.344442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.344750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.344782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 
00:29:03.032 [2024-07-15 11:39:46.344926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.344958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.345201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.345241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.032 [2024-07-15 11:39:46.345534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.032 [2024-07-15 11:39:46.345566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.032 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.345851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.345883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.346154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.346185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.346428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.346462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.346760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.346791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.347079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.347110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.347383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.347417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.347717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.347748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 
00:29:03.033 [2024-07-15 11:39:46.348034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.348066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.348295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.348327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.348529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.348565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.348886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.348918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.349201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.349243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.349526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.349558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.349755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.349787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.349936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.349968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.350264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.350296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.350509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.350541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 
00:29:03.033 [2024-07-15 11:39:46.350751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.350784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.351098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.351130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.351425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.351459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.351670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.351701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.351969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.352000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.352271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.352303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.352623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.352655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.352946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.352978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.353269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.353302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.353592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.353623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 
00:29:03.033 [2024-07-15 11:39:46.353918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.353951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.354247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.354280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.354511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.354542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.354819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.354850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.355131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.355163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.355445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.355478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.355773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.355805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.356042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.356073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.356340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.356374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.356583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.356615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 
00:29:03.033 [2024-07-15 11:39:46.356931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.356963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.357255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.357288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.357499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.033 [2024-07-15 11:39:46.357530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.033 qpair failed and we were unable to recover it. 00:29:03.033 [2024-07-15 11:39:46.357762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.357794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.358074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.358106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.358373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.358406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.358722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.358754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.358970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.359002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.359200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.359242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.359535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.359567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-07-15 11:39:46.359818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.359850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.360145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.360177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.360394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.360431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.360638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.360670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.360882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.360914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.361077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.361109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.361406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.361439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.361678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.361709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.361982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.362014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.362296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.362330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-07-15 11:39:46.362634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.362666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.362946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.362978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.363197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.363239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.363537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.363569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.363796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.363828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.364058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.364091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.364375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.364408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.364637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.364669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.364873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.364904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.365103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.365135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-07-15 11:39:46.365344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.365378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.365578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.365610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.365927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.365958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.366250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.366283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.366579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.366611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.366902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.366934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.367093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.367125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.034 [2024-07-15 11:39:46.367339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.034 [2024-07-15 11:39:46.367372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.034 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.367577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.367609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.367881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.367912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 
00:29:03.035 [2024-07-15 11:39:46.368120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.368152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.368428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.368461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.368732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.368763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.369057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.369090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.369370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.369403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.369728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.369760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.369958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.369990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.370280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.370314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.370614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.370646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.370864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.370895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 
00:29:03.035 [2024-07-15 11:39:46.371119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.371151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.371428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.371462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.371591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.371632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.371901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.371933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.372247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.372281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.372498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.372530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.372824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.372856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.373174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.373207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.373425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.373457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.373600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.373632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 
00:29:03.035 [2024-07-15 11:39:46.373827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.373859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.374126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.374158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.374376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.374409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.374623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.374654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.374853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.374883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.375206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.375254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.375579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.375610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.375829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.375861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.376151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.376182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.376479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.376512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 
00:29:03.035 [2024-07-15 11:39:46.376798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.376830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.377121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.377153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.377353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.377386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.377609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.377640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.377955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.377987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.378197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.378240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.378528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.378560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.378730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.378762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.035 [2024-07-15 11:39:46.379051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.035 [2024-07-15 11:39:46.379083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.035 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.379405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.379439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 
00:29:03.036 [2024-07-15 11:39:46.379754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.379785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.380092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.380124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.380356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.380389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.380612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.380643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.380930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.380961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.381263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.381297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.381579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.381611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.381810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.381842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.382128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.382159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.382359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.382391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 
00:29:03.036 [2024-07-15 11:39:46.382661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.382692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.382914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.382945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.383247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.383286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.383502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.383535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.383673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.383705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.383858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.383889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.384174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.384205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.384508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.384541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.384825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.384857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.385065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.385097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 
00:29:03.036 [2024-07-15 11:39:46.385409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.385441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.385714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.385746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.385994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.386025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.386295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.386327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.386567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.386598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.386826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.386858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.387159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.387191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.387515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.387548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.387842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.387874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.388143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.388175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 
00:29:03.036 [2024-07-15 11:39:46.388402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.388435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.388707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.388739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.389028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.389060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.389356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.389390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.389605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.389638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.389949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.389980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.390252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.390285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.390484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.390515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.390682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.390713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-07-15 11:39:46.390933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.036 [2024-07-15 11:39:46.390965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.036 qpair failed and we were unable to recover it. 
00:29:03.037 [2024-07-15 11:39:46.391186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.391217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.391487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.391519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.391829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.391860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.392163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.392195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.392479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.392511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.392730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.392762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.393050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.393082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.393373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.393406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.393701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.393732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.393941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.393972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 
00:29:03.037 [2024-07-15 11:39:46.394270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.394302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.394519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.394550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.394821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.394858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.395169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.395200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.395506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.395538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.395819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.395851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.396149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.396182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.396481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.396514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.396808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.396838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.397088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.397118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 
00:29:03.037 [2024-07-15 11:39:46.397444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.397478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.397749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.397780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.398076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.398107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.398426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.398459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.398760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.398791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.398991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.399024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.399179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.399211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.399450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.399483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.399704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.399734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.400027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.400059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 
00:29:03.037 [2024-07-15 11:39:46.400355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.400389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.400601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.400633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.400859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.400890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.401184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.401215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.401521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.401554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.401837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.401868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.402083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.402115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.402326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.402360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.402652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.402683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-07-15 11:39:46.402840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.402872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.037 qpair failed and we were unable to recover it. 
00:29:03.037 [2024-07-15 11:39:46.403141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.037 [2024-07-15 11:39:46.403172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.403397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.403430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.403573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.403604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.403805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.403837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.404151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.404182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.404447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.404479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.404726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.404757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.404971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.405002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.405202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.405243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.405537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.405570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 
00:29:03.038 [2024-07-15 11:39:46.405811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.405841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.406058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.406090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.406379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.406417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.406640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.406673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.406901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.406933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.407081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.407113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.407404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.407437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.407637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.407669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.407949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.407980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.408269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.408302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 
00:29:03.038 [2024-07-15 11:39:46.408603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.408635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.408916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.408947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.409146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.409177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.409500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.409534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.409806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.409837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.410139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.410170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.410455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.410490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.410622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.410654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.410925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.410957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.411183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.411214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 
00:29:03.038 [2024-07-15 11:39:46.411450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.411482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.411619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.411651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.411920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.411952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.412079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.412110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.412381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.412415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-07-15 11:39:46.412641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.038 [2024-07-15 11:39:46.412672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.412832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.412863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.413158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.413189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.413418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.413451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.413725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.413756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 
00:29:03.039 [2024-07-15 11:39:46.414054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.414085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.414307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.414340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.414568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.414599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.414742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.414775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.414981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.415012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.415259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.415292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.415591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.415622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.415841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.415873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.416110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.416142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.416432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.416466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 
00:29:03.039 [2024-07-15 11:39:46.416759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.416791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.417085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.417117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.417408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.417446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.417739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.417770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.418062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.418093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.418424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.418457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.418678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.418710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.418998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.419031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.419350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.419383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.419651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.419682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 
00:29:03.039 [2024-07-15 11:39:46.419905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.419937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.420247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.420281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.420524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.420556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.420804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.420836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.421131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.421162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.421410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.421443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.421738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.421770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.421985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.422017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.422234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.422267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.422584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.422615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 
00:29:03.039 [2024-07-15 11:39:46.422861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.422893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.423167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.423199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.423506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.423537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.423682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.423713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.424004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.424035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.424318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.424351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.039 [2024-07-15 11:39:46.424565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.039 [2024-07-15 11:39:46.424597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.039 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.424901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.424933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.425218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.425259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.425482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.425514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 
00:29:03.040 [2024-07-15 11:39:46.425715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.425746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.426045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.426076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.426378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.426411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.426628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.426659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.426888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.426919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.427194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.427242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.427539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.427571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.427870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.427903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.428186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.428217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.428472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.428505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 
00:29:03.040 [2024-07-15 11:39:46.428707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.428738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.428979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.429010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.429209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.429261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.429555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.429587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.429808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.429839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.430107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.430138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.430392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.430426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.430747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.430779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.431034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.431066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.431381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.431415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 
00:29:03.040 [2024-07-15 11:39:46.431692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.431723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.431935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.431967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.432247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.432281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.432483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.432515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.432803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.432834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.433152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.433183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.433347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.433381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.433700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.433731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.434059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.434091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.434362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.434395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 
00:29:03.040 [2024-07-15 11:39:46.434694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.434725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.435012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.435045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.435343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.435376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.435530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.435562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.435830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.435862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.436170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.436202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.040 [2024-07-15 11:39:46.436487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.040 [2024-07-15 11:39:46.436520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.040 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.436823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.436854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.437135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.437166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.437399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.437433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 
00:29:03.041 [2024-07-15 11:39:46.437726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.437758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.437977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.438009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.438298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.438332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.438629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.438661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.438973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.439004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.439291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.439324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.439622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.439655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.439941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.439973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.440268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.440301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.440591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.440623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 
00:29:03.041 [2024-07-15 11:39:46.440842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.440874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.441167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.441199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.441419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.441457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.441748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.441779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.442094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.442126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.442261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.442295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.442612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.442643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.442882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.442914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.443255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.443288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.443574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.443606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 
00:29:03.041 [2024-07-15 11:39:46.443894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.443926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.444234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.444267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.444490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.444522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.444726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.444758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.445072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.445104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.445304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.445337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.445645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.445677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.445839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.445871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.446157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.446189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.446402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.446435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 
00:29:03.041 [2024-07-15 11:39:46.446651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.446682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.446977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.447008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.447301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.447335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.447549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.447581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.447796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.447828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.448063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.448095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.448307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.448351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.041 qpair failed and we were unable to recover it. 00:29:03.041 [2024-07-15 11:39:46.448519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.041 [2024-07-15 11:39:46.448552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.448801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.448832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.449162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.449196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 
00:29:03.042 [2024-07-15 11:39:46.449426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.449458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.449766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.449797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.450079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.450112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.450407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.450439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.450661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.450692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.450982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.451012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.451312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.451345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.451631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.451664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.451879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.451910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.452188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.452220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 
00:29:03.042 [2024-07-15 11:39:46.452498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.452530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.452746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.452777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.452927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.452964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.453287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.453319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.453485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.453516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.453717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.453749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.454017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.454049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.454333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.454366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.454608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.454640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.454932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.454963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 
00:29:03.042 [2024-07-15 11:39:46.455258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.455292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.455580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.455612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.455860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.455891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.456086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.456117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.456258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.456291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.456497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.456527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.456836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.456868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.457163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.457194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.457486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.457518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.457671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.457703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 
00:29:03.042 [2024-07-15 11:39:46.457844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.457875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.458166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.458198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.042 [2024-07-15 11:39:46.458487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.042 [2024-07-15 11:39:46.458520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.042 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.458716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.458748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.459023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.459054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.459276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.459310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.459602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.459633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.459926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.459957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.460259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.460292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.460527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.460565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 
00:29:03.043 [2024-07-15 11:39:46.460900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.460932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.461222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.461263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.461576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.461608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.461913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.461945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.462233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.462267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.462541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.462573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.462871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.462902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.463124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.463156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.463442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.463475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.463774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.463805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 
00:29:03.043 [2024-07-15 11:39:46.464047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.464079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.464352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.464385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.464673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.464705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.465001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.465033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.465330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.465363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.465650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.465682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.465950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.465980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.466181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.466213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.466421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.466453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.466721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.466752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 
00:29:03.043 [2024-07-15 11:39:46.467037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.467068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.467310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.467344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.467564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.467596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.467829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.467860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.468128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.468159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.468390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.468423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.468645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.468677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.468968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.469000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.469211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.469252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.469451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.469483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 
00:29:03.043 [2024-07-15 11:39:46.469772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.469803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.470118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.470149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.470360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.043 [2024-07-15 11:39:46.470394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.043 qpair failed and we were unable to recover it. 00:29:03.043 [2024-07-15 11:39:46.470671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.470703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.470998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.471030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.471301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.471334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.471631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.471662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.471977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.472009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.472295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.472328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.472622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.472659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 
00:29:03.044 [2024-07-15 11:39:46.472947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.472978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.473197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.473238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.473533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.473565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.473849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.473880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.474172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.474203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.474499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.474530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.474827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.474859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.475094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.475125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.475412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.475444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.475610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.475641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 
00:29:03.044 [2024-07-15 11:39:46.475934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.475965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.476245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.476278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.476482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.476514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.476787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.476818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.477037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.477069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.477360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.477393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.477538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.477570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.477837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.477868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.478127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.478159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.478356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.478387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 
00:29:03.044 [2024-07-15 11:39:46.478619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.478650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.478871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.478902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.479120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.479150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.479443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.479476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.479648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.479680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.479876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.479908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.480204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.480258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.480552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.480584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.480825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.480856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.481129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.481160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 
00:29:03.044 [2024-07-15 11:39:46.481398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.481431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.481702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.481735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.482002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.482033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.482301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.044 [2024-07-15 11:39:46.482334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.044 qpair failed and we were unable to recover it. 00:29:03.044 [2024-07-15 11:39:46.482625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.482656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.482953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.482984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.483277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.483311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.483509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.483542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.483759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.483790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.484014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.484051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 
00:29:03.045 [2024-07-15 11:39:46.484345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.484378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.484594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.484626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.484918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.484950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.485265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.485299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.485564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.485596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.485867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.485899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.486167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.486198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.486512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.486545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.486849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.486881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.487086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.487118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 
00:29:03.045 [2024-07-15 11:39:46.487254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.487288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.487557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.487588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.487903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.487934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.488251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.488284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.488556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.488588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.488828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.488860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.489070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.489102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.489400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.489434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.489705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.489738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.490045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.490077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 
00:29:03.045 [2024-07-15 11:39:46.490355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.490389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.490659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.490690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.490918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.490951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.491221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.491268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.491481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.491513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.491716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.491748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.491969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.492001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.492213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.492257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.492528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.492560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 00:29:03.045 [2024-07-15 11:39:46.492855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.045 [2024-07-15 11:39:46.492886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.045 qpair failed and we were unable to recover it. 
00:29:03.045 [2024-07-15 11:39:46.493123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.045 [2024-07-15 11:39:46.493155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420
00:29:03.045 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt between 11:39:46.493 and 11:39:46.552: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0x7f6258000b90 (addr=10.0.0.2, port=4420), and each qpair fails without recovering ...]
00:29:03.051 [2024-07-15 11:39:46.552119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.051 [2024-07-15 11:39:46.552156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420
00:29:03.051 qpair failed and we were unable to recover it.
00:29:03.051 [2024-07-15 11:39:46.552408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.552452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.552751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.552783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.552982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.553013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.553245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.553277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.553539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.553570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.553806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.553838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.554037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.554068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.554337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.554371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.554642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.554674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.555006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.555037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 
00:29:03.051 [2024-07-15 11:39:46.555251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.555284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.555551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.555583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.555807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.555839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.556133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.556165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.556478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.556512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.556765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.556797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.557123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.557155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.557358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.557393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.557614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.557646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.557915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.557947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 
00:29:03.051 [2024-07-15 11:39:46.558213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.558255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.558565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.558597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.558851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.558883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.559096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.559128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.559372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.559405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.559671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.559704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.559967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.560000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.051 qpair failed and we were unable to recover it. 00:29:03.051 [2024-07-15 11:39:46.560312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.051 [2024-07-15 11:39:46.560345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.560589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.560620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.560786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.560818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 
00:29:03.052 [2024-07-15 11:39:46.561044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.561075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.561276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.561309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.561602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.561633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.561910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.561941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.562243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.562276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.562489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.562521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.562809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.562841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.563167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.563198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.563487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.563519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.563812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.563849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 
00:29:03.052 [2024-07-15 11:39:46.564068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.564099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.564322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.564356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.564647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.564680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.564972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.565005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.565238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.565272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.565557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.565588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.565806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.565838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.566080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.566112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.566386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.566420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.566622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.566655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 
00:29:03.052 [2024-07-15 11:39:46.566924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.566956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.567256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.567289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.567513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.567544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.567844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.567876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.568164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.568196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.568447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.568479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.568721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.568753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.569021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.569053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.569354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.569387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.569673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.569704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 
00:29:03.052 [2024-07-15 11:39:46.569868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.569900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.570125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.570158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.570400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.570433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.570731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.570764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.571050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.571082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.571364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.571397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.571633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.571665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.571965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.052 [2024-07-15 11:39:46.571997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.052 qpair failed and we were unable to recover it. 00:29:03.052 [2024-07-15 11:39:46.572304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.572337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.572623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.572655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 
00:29:03.053 [2024-07-15 11:39:46.572949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.572981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.573292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.573326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.573618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.573650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.573846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.573878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.574148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.574180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.574498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.574531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.574825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.574857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.575149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.575180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.575434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.575467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.575691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.575727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 
00:29:03.053 [2024-07-15 11:39:46.575947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.575978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.576196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.576250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.576546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.576577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.576869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.576901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.577127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.577158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.577389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.577422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.577726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.577758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.578042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.578073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.578298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.578330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.578530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.578562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 
00:29:03.053 [2024-07-15 11:39:46.578762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.578794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.579089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.579120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.579332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.579365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.579578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.579610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.579877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.579908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.580109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.580140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.580423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.580456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.580660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.580692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.580990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.581020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.581293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.581326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 
00:29:03.053 [2024-07-15 11:39:46.581595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.581626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.581863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.581894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.582092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.582124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.582439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.582472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.582740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.582771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.583090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.583121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.583370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.583405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.583694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.583725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.053 [2024-07-15 11:39:46.584040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.053 [2024-07-15 11:39:46.584072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.053 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.584373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.584407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 
00:29:03.054 [2024-07-15 11:39:46.584668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.584699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.584986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.585017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.585242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.585275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.585528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.585559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.585685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.585717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.585933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.585964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.586239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.586271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.586404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.586437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.586724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.586755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.586950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.586988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 
00:29:03.054 [2024-07-15 11:39:46.587250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.587284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.587504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.587535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.587828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.587860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.588156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.588187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.588495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.588528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.588742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.588773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.589041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.589072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.589204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.589246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.589473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.589504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.589802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.589833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 
00:29:03.054 [2024-07-15 11:39:46.590130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.590161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.590382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.590415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.590702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.590734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.591035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.591066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.591308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.591341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.591612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.591643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.591949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.591981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.592186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.592217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.592543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.592572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.592780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.592812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 
00:29:03.054 [2024-07-15 11:39:46.593128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.593158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.593303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.593336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.593623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.593655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.593821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.593852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.594091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.594122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.594410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.594444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.594691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.594722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.054 qpair failed and we were unable to recover it. 00:29:03.054 [2024-07-15 11:39:46.594936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-07-15 11:39:46.594968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.055 qpair failed and we were unable to recover it. 00:29:03.055 [2024-07-15 11:39:46.595301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.055 [2024-07-15 11:39:46.595334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.055 qpair failed and we were unable to recover it. 00:29:03.055 [2024-07-15 11:39:46.595627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.055 [2024-07-15 11:39:46.595658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.055 qpair failed and we were unable to recover it. 
00:29:03.055 [2024-07-15 11:39:46.595788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.055 [2024-07-15 11:39:46.595819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.055 qpair failed and we were unable to recover it. 00:29:03.055 [2024-07-15 11:39:46.596134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.055 [2024-07-15 11:39:46.596165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.055 qpair failed and we were unable to recover it. 00:29:03.055 [2024-07-15 11:39:46.596467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.055 [2024-07-15 11:39:46.596501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.055 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.596788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.596822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.596979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.597011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.597304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.597337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.597571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.597602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.597822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.597853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.598096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.598128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.598395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.598432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 
00:29:03.333 [2024-07-15 11:39:46.598651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.598684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.598962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.598994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.599263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.599295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.599499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.599530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.599817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.599848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.600045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.600078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.600253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.600286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.600446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.600478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.600716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.600747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.601050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.601081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 
00:29:03.333 [2024-07-15 11:39:46.601371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.601404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.601692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.601723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.601955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.601987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.602198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.602249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.602601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.602632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.602921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.602952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.603254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.603287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.603506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.603537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.603782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.603813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.604136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.604167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 
00:29:03.333 [2024-07-15 11:39:46.604401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.604434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.604713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.604745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.605063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.605094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.605267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.605299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.605463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.605494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.605788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.333 [2024-07-15 11:39:46.605820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.333 qpair failed and we were unable to recover it. 00:29:03.333 [2024-07-15 11:39:46.606018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.606054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.606275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.606307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.606530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.606562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.606779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.606810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 
00:29:03.334 [2024-07-15 11:39:46.607127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.607158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.607358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.607391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.607550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.607582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.607806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.607838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.608107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.608139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.608404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.608436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.608654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.608685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.608969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.609001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.609273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.609306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.609520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.609551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 
00:29:03.334 [2024-07-15 11:39:46.609809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.609841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.610136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.610166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.610457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.610490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.610789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.610821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.611137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.611168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.611439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.611472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.611711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.611742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.611889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.611922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.612117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.612148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.612441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.612475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 
00:29:03.334 [2024-07-15 11:39:46.612788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.612819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.613058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.613090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.613386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.613419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.613732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.613763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.614071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.614103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.614380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.614413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.614706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.614737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.615033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.615066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.615284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.615316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.615448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.615480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 
00:29:03.334 [2024-07-15 11:39:46.615719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.615751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.615988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.616020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.616221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.616264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.616558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.616589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.334 qpair failed and we were unable to recover it. 00:29:03.334 [2024-07-15 11:39:46.616790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.334 [2024-07-15 11:39:46.616822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.617019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.617051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.617267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.617306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.617521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.617553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.617854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.617886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.618189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.618222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 
00:29:03.335 [2024-07-15 11:39:46.618537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.618569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.618855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.618887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.619184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.619216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.619509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.619541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.619866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.619898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.620068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.620099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.620437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.620470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.620715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.620748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.620984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.621015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.621294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.621327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 
00:29:03.335 [2024-07-15 11:39:46.621554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.621587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.621785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.621817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.622086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.622117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.622339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.622371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.622584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.622616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.622885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.622916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.623121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.623152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.623392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.623424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.623642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.623672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.623892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.623923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 
00:29:03.335 [2024-07-15 11:39:46.624163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.624194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.624538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.624572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.624865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.624896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.625212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.625257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.625461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.625494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.625783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.625815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.626089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.626121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.626360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.626393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.626662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.626694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.626987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.627018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 
00:29:03.335 [2024-07-15 11:39:46.627244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.627277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.627564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.627595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.627830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.627863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.335 [2024-07-15 11:39:46.628160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.335 [2024-07-15 11:39:46.628190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.335 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.628479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.628511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.628724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.628755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.629076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.629112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.629396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.629429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.629627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.629658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.629942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.629974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 
00:29:03.336 [2024-07-15 11:39:46.630208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.630250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.630545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.630576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.630786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.630819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.631118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.631149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.631283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.631316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.631581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.631612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.631926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.631958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.632265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.632299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.632540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.632571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.632913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.632944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 
00:29:03.336 [2024-07-15 11:39:46.633243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.633276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.633475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.633506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.633724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.633756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.633989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.634021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.634357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.634390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.634683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.634715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.635009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.635040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.635334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.635368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.635660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.635691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.635984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.636016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 
00:29:03.336 [2024-07-15 11:39:46.636234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.636266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.636560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.636593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.636806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.636838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.637057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.637090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.637371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.637404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.637707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.637739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.638019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.638051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.638350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.638383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.638514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.638545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 00:29:03.336 [2024-07-15 11:39:46.638826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.336 [2024-07-15 11:39:46.638857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.336 qpair failed and we were unable to recover it. 
00:29:03.336 [2024-07-15 11:39:46.639124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.639155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.639384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.639417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.639708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.639740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.639977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.640008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.640296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.640329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.640622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.640653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.640944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.640980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.641272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.641306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.641599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.641631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.641852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.641884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 
00:29:03.337 [2024-07-15 11:39:46.642093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.642125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.642336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.642369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.642639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.642670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.642985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.643016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.643295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.643329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.643598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.643630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.643840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.643872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.644141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.644172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.644449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.644482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.644706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.644738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 
00:29:03.337 [2024-07-15 11:39:46.644987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.645019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.645218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.645259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.645555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.645587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.645788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.645821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.646091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.646122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.646400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.646433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.646696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.646727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.647004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.647035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.647330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.647363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.647599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.647630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 
00:29:03.337 [2024-07-15 11:39:46.647872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.647903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.648102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.648134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.648350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.648383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.648656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.648688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.337 [2024-07-15 11:39:46.649020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.337 [2024-07-15 11:39:46.649051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.337 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.649343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.649377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.649653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.649684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.649984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.650015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.650232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.650265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.650504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.650535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 
00:29:03.338 [2024-07-15 11:39:46.650828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.650859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.651152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.651184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.651492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.651525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.651759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.651790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.652082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.652112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.652410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.652444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.652733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.652770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.653065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.653097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.653342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.653375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.653667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.653699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 
00:29:03.338 [2024-07-15 11:39:46.653867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.653899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.654056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.654088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.654378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.654412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.654648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.654679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.654892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.654923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.655216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.655256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.655574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.655606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.655900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.655933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.656254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.656298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.656533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.656565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 
00:29:03.338 [2024-07-15 11:39:46.656769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.656801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.657000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.657032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.657249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.657283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.657602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.657634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.657852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.657884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.658181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.658212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.658420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.658452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.658652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.658683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.658948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.658980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.659263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.659296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 
00:29:03.338 [2024-07-15 11:39:46.659461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.659492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.659636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.659668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.659818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.659849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.660058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.660089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.660300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.338 [2024-07-15 11:39:46.660334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.338 qpair failed and we were unable to recover it. 00:29:03.338 [2024-07-15 11:39:46.660604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.660637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.660905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.660936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.661247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.661281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.661553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.661585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.661794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.661825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 
00:29:03.339 [2024-07-15 11:39:46.662101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.662133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.662450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.662482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.662777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.662808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.663052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.663083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.663308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.663340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.663629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.663660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.663928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.663966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.664183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.664213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.664395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.664427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.664723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.664755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 
00:29:03.339 [2024-07-15 11:39:46.665043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.665075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.665344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.665377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.665654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.665686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.665907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.665939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.666089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.666121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.666266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.666299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.666572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.666604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.666902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.666933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.667153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.667185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.667479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.667512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 
00:29:03.339 [2024-07-15 11:39:46.667827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.667859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.668061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.668094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.668291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.668323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.668612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.668644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.668866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.668898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.669116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.669147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.669414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.669447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.669765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.669797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.670061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.670093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.670385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.670416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 
00:29:03.339 [2024-07-15 11:39:46.670732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.670764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.671074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.671105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.671384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.671417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.671625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.671658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.671906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.339 [2024-07-15 11:39:46.671937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.339 qpair failed and we were unable to recover it. 00:29:03.339 [2024-07-15 11:39:46.672157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.672188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.672483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.672516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.672799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.672830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.673070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.673102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.673303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.673335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 
00:29:03.340 [2024-07-15 11:39:46.673534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.673567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.673861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.673893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.674186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.674217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.674513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.674545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.674837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.674869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.675164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.675195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.675423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.675461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.675758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.675790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.675991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.676023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.676263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.676296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 
00:29:03.340 [2024-07-15 11:39:46.676564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.676596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.676893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.676925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.677053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.677085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.677376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.677409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.677698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.677731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.678022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.678054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.678302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.678335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.678532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.678564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.678762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.678794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.678942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.678974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 
00:29:03.340 [2024-07-15 11:39:46.679177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.679209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.679421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.679453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.679722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.679753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.680069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.680102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.680407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.680440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.680644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.680678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.680877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.680909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.681118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.681149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.681371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.681403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.681642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.681675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 
00:29:03.340 [2024-07-15 11:39:46.681941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.681973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.682294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.682327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.682535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.682566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.682718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.682750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.683052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.683086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.340 [2024-07-15 11:39:46.683387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.340 [2024-07-15 11:39:46.683419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.340 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.683631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.683664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.683951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.683983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.684142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.684173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.684465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.684499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 
00:29:03.341 [2024-07-15 11:39:46.684794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.684825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.685066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.685097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.685371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.685404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.685606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.685637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.685801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.685833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.686132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.686164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.686385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.686423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.686575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.686608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.686882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.686915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.687156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.687188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 
00:29:03.341 [2024-07-15 11:39:46.687396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.687429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.687697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.687728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.687927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.687959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.688157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.688190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.688443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.688475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.688675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.688707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.689047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.689079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.689381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.689415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.689623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.689656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.689887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.689919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 
00:29:03.341 [2024-07-15 11:39:46.690246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.690279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.690560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.690591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.690898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.690930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.691162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.691194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.691365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.691397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.691599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.691630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.691895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.691928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.341 [2024-07-15 11:39:46.692222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.341 [2024-07-15 11:39:46.692277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.341 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.692549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.692583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.692781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.692813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 
00:29:03.342 [2024-07-15 11:39:46.693095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.693127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.693361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.693396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.693558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.693590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.693868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.693900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.694127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.694159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.694433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.694465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.694682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.694714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.695068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.695100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.695319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.695353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.695567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.695598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 
00:29:03.342 [2024-07-15 11:39:46.695867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.695898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.696202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.696244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.696519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.696553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.696874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.696906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.697223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.697268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.697489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.697521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.697708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.697744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.697906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.697938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.698209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.698253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.698477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.698509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 
00:29:03.342 [2024-07-15 11:39:46.698708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.698739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.698919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.698951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.699247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.699280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.699504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.699536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.699819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.699850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.700002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.700034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.700273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.700307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.700452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.700484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.700696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.700727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.700966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.700997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 
00:29:03.342 [2024-07-15 11:39:46.701223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.701266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.701470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.701503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.701799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.701831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.702121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.702151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.702391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.702424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.702722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.702754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.702982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.703014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.703298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.703332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.342 qpair failed and we were unable to recover it. 00:29:03.342 [2024-07-15 11:39:46.703547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.342 [2024-07-15 11:39:46.703578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.703750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.703781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 
00:29:03.343 [2024-07-15 11:39:46.704088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.704120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.704404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.704438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.704637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.704669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.704828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.704862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.705093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.705126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.705394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.705429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.705718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.705751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.705921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.705952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.706155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.706188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.706501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.706535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 
00:29:03.343 [2024-07-15 11:39:46.706669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.706700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.706968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.707001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.707201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.707244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.707398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.707429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.707721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.707752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.707899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.707931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.708241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.708281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.708426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.708459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.708749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.708782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.709095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.709127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 
00:29:03.343 [2024-07-15 11:39:46.709412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.709447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.709669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.709701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.709872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.709904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.710098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.710130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.710290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.710324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.710546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.710578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.710811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.710843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.710990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.711022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.711238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.711272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.711568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.711600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 
00:29:03.343 [2024-07-15 11:39:46.711809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.711842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.712139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.712172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.712398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.712431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.712722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.712753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.713066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.713099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.713293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.713326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.713551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.713583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.713906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.713937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.714243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.714276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.343 qpair failed and we were unable to recover it. 00:29:03.343 [2024-07-15 11:39:46.714504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.343 [2024-07-15 11:39:46.714537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 
00:29:03.344 [2024-07-15 11:39:46.714679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.714710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.714873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.714905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.715054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.715086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.715328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417000 is same with the state(5) to be set 00:29:03.344 [2024-07-15 11:39:46.715654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.715735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.715966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.716001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.716304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.716341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.716559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.716592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.716807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.716841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.716995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.717027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 
00:29:03.344 [2024-07-15 11:39:46.717175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.717208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.717462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.717496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.717752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.717786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.717975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.718007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.718140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.718172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.718397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.718431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.718703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.718736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.719097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.719175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.719528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.719566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.719804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.719837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 
00:29:03.344 [2024-07-15 11:39:46.719984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.720016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.720169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.720202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.720491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.720525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.720675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.720707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.720930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.720962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.721169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.721202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.721515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.721548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.721761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.721794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.721932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.721965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.722272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.722304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 
00:29:03.344 [2024-07-15 11:39:46.722520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.722562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.722786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.722820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.723051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.723083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.723245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.723278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.723524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.723556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.723784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.723817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.724021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.724053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.724207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.724250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.724546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.724579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.724827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.724860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 
00:29:03.344 [2024-07-15 11:39:46.725064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.344 [2024-07-15 11:39:46.725097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.344 qpair failed and we were unable to recover it. 00:29:03.344 [2024-07-15 11:39:46.725322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.725354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.725558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.725591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.725743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.725776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.726055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.726088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.726246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.726280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.726481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.726514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.726681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.726714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.726944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.726977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.727275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.727310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 
00:29:03.345 [2024-07-15 11:39:46.727563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.727596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.727838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.727870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.728088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.728119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.728416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.728450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.728691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.728724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.728997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.729030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.729296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.729329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.729538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.729571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.729838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.729869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.730088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.730121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 
00:29:03.345 [2024-07-15 11:39:46.730256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.730288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.730512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.730544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.730768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.730800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.731097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.731129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.731361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.731394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.731650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.731681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.731812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.731844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.732061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.732092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.732381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.732414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.732649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.732680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 
00:29:03.345 [2024-07-15 11:39:46.732838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.732871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.733106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.733138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.733352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.733385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.733603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.733634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.733857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.733889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.345 qpair failed and we were unable to recover it. 00:29:03.345 [2024-07-15 11:39:46.734158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.345 [2024-07-15 11:39:46.734190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.734394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.734427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.734703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.734735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.734936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.734968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.735260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.735293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 
00:29:03.346 [2024-07-15 11:39:46.735566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.735597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.735862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.735895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.736097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.736128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.736429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.736461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.736763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.736794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.736993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.737025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.737315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.737347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.737478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.737510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.737825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.737857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.738073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.738105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 
00:29:03.346 [2024-07-15 11:39:46.738376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.738408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.738642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.738675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.738887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.738919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.739209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.739247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.739540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.739572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.739859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.739891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.740109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.740142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.740437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.740476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.740783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.740815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.740957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.740989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 
00:29:03.346 [2024-07-15 11:39:46.741283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.741316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.741609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.741640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.741935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.741967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.742173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.742205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.742452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.742484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.742776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.742808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.743102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.743133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.743370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.743404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.743619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.743651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.743934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.743967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 
00:29:03.346 [2024-07-15 11:39:46.744259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.744292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.744517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.744549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.744759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.744791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.745056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.745088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.745304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.745337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.346 [2024-07-15 11:39:46.745544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.346 [2024-07-15 11:39:46.745576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.346 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.745867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.745899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.746095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.746127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.746415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.746447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.746745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.746776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 
00:29:03.347 [2024-07-15 11:39:46.747021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.747053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.747350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.747383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.747671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.747704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.748018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.748050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.748277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.748310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.748526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.748558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.748848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.748880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.749147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.749179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.749461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.749493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.749695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.749726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 
00:29:03.347 [2024-07-15 11:39:46.750008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.750039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.750188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.750220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.750501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.750533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.750740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.750772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.750990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.751022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.751250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.751285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.751557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.751588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.751885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.751922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.752167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.752198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.752513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.752546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 
00:29:03.347 [2024-07-15 11:39:46.752818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.752850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.753148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.753180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.753489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.753521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.753742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.753773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.753917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.753949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.754153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.754185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.754411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.754443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.754761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.754792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.755014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.755046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.755339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.755372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 
00:29:03.347 [2024-07-15 11:39:46.755664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.755696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.756016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.756049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.756318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.756351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.756635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.756666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.756972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.757004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.347 [2024-07-15 11:39:46.757247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.347 [2024-07-15 11:39:46.757280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.347 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.757491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.757523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.757756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.757788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.758063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.758095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.758317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.758349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 
00:29:03.348 [2024-07-15 11:39:46.758620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.758652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.758889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.758921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.759167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.759199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.759376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.759409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.759635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.759668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.759935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.759967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.760259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.760308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.760446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.760478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.760719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.760750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.760975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.761007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 
00:29:03.348 [2024-07-15 11:39:46.761211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.761252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.761471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.761503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.761703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.761735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.761968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.762000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.762209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.762249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.762521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.762553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.762767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.762798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.763086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.763124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.763335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.763368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.763567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.763598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 
00:29:03.348 [2024-07-15 11:39:46.763817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.763849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.764135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.764166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.764469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.764502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.764804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.764836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.765118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.765149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.765399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.765433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.765647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.765678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.765896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.765927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.766139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.766170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.766452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.766484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 
00:29:03.348 [2024-07-15 11:39:46.766786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.766817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.767058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.767090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.767320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.767353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.767573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.767604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.767805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.767836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.768034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.768066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.348 qpair failed and we were unable to recover it. 00:29:03.348 [2024-07-15 11:39:46.768358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.348 [2024-07-15 11:39:46.768392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.768674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.768705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.768979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.769011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.769311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.769344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 
00:29:03.349 [2024-07-15 11:39:46.769654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.769686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.769945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.769976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.770259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.770292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.770497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.770529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.770807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.770839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.771108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.771140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.771438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.771472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.771684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.771715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.771971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.772002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.772320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.772352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 
00:29:03.349 [2024-07-15 11:39:46.772562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.772594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.772868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.772899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.773149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.773180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.773394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.773427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.773693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.773724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.773854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.773886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.774119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.774150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.774351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.774396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.774601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.774633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.774924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.774955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 
00:29:03.349 [2024-07-15 11:39:46.775167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.775198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.775505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.775538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.775824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.775855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.776146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.776178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.776496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.776531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.776732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.776765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.777099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.777131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.777404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.777436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.777675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.777706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.778022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.778054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 
00:29:03.349 [2024-07-15 11:39:46.778338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.778372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.778674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.778706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.778946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.778977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.779254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.779287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.779554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.779586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.779811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.779843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.349 qpair failed and we were unable to recover it. 00:29:03.349 [2024-07-15 11:39:46.780051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.349 [2024-07-15 11:39:46.780083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.780286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.780319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.780603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.780634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.780875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.780907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 
00:29:03.350 [2024-07-15 11:39:46.781145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.781177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.781331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.781364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.781660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.781691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.781892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.781922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.782194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.782237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.782511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.782543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.782761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.782792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.783062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.783095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.783370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.783403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.783710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.783741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 
00:29:03.350 [2024-07-15 11:39:46.784057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.784089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.784382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.784416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.784652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.784683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.784986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.785017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.785305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.785338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.785556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.785587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.785733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.785764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.785981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.786018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.786297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.786331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.786534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.786565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 
00:29:03.350 [2024-07-15 11:39:46.786800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.786831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.787122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.787154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.787441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.787474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.787772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.787804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.787973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.788005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.788217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.788257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.788530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.788561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.788731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.788763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.789029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.789061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.350 [2024-07-15 11:39:46.789379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.789412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 
00:29:03.350 [2024-07-15 11:39:46.789672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.350 [2024-07-15 11:39:46.789704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.350 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.789929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.789961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.790196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.790235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.790457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.790489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.790788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.790819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.791124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.791155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.791425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.791457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.791672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.791703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.791913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.791945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.792221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.792264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 
00:29:03.351 [2024-07-15 11:39:46.792568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.792600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.792875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.792907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.793037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.793068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.793383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.793416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.793710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.793742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.794011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.794042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.794344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.794378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.794535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.794566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.794813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.794844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.795158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.795189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 
00:29:03.351 [2024-07-15 11:39:46.795474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.795507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.795805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.795835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.796070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.796101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.796436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.796468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.796719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.796749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.797068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.797100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.797325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.797357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.797568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.797605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.797872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.797903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.798038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.798068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 
00:29:03.351 [2024-07-15 11:39:46.798367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.798400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.798626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.798658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.798925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.798956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.799276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.799311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.799609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.799641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.799859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.799891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.800187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.800218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.800452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.800484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.800644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.800677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.351 [2024-07-15 11:39:46.800898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.800930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 
00:29:03.351 [2024-07-15 11:39:46.801247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.351 [2024-07-15 11:39:46.801280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.351 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.801589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.801622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.801849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.801881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.802078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.802110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.802399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.802433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.802657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.802690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.802987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.803019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.803151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.803183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.803407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.803440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.803727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.803758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 
00:29:03.352 [2024-07-15 11:39:46.804064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.804095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.804391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.804424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.804714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.804746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.805042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.805074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.805300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.805333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.805654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.805686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.805923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.805955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.806115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.806147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.806385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.806418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.806688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.806719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 
00:29:03.352 [2024-07-15 11:39:46.807038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.807069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.807361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.807393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.807687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.807719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.807925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.807957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.808252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.808285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.808569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.808600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.808767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.808799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.809030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.809067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.809291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.809325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.809625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.809657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 
00:29:03.352 [2024-07-15 11:39:46.809968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.810001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.810285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.810319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.810543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.810575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.810861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.810893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.811152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.811183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.811500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.811533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.811808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.811840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.812140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.812171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.812487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.812520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.812744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.812776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 
00:29:03.352 [2024-07-15 11:39:46.813065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.813097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.352 qpair failed and we were unable to recover it. 00:29:03.352 [2024-07-15 11:39:46.813415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.352 [2024-07-15 11:39:46.813448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.813707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.813738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.814008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.814040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.814278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.814311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.814607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.814639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.814841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.814873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.815119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.815151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.815446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.815480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.815772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.815803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 
00:29:03.353 [2024-07-15 11:39:46.816070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.816102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.816335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.816368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.816663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.816695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.816984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.817016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.817265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.817299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.817617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.817649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.817965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.817996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.818140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.818172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.818339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.818372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.818663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.818696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 
00:29:03.353 [2024-07-15 11:39:46.818989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.819020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.819248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.819281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.819572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.819604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.819893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.819925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.820217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.820258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.820552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.820584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.820869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.820901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.821120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.821157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.821455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.821487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.821711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.821742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 
00:29:03.353 [2024-07-15 11:39:46.821968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.822000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.822213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.822268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.822496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.822528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.822819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.822850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.823147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.823179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.823424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.823457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.823745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.823777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.824069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.824101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.824313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.824345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.824644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.824676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 
00:29:03.353 [2024-07-15 11:39:46.824966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.824998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.353 [2024-07-15 11:39:46.825298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.353 [2024-07-15 11:39:46.825331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.353 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.825569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.825602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.825745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.825776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.825991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.826022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.826220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.826262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.826476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.826508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.826779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.826811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.827068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.827101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.827392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.827424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 
00:29:03.354 [2024-07-15 11:39:46.827649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.827680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.827966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.827998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.828292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.828324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.828621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.828653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.828944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.828976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.829297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.829329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.829555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.829586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.829880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.829911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.830184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.830215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 00:29:03.354 [2024-07-15 11:39:46.830527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.354 [2024-07-15 11:39:46.830560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.354 qpair failed and we were unable to recover it. 
00:29:03.354 [2024-07-15 11:39:46.830834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.354 [2024-07-15 11:39:46.830866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420
00:29:03.354 qpair failed and we were unable to recover it.
00:29:03.354 [the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") recurs back-to-back for every reconnect attempt between 11:39:46.830 and 11:39:46.892]
00:29:03.359 [2024-07-15 11:39:46.892917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.359 [2024-07-15 11:39:46.892948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420
00:29:03.359 qpair failed and we were unable to recover it.
00:29:03.359 [2024-07-15 11:39:46.893264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.359 [2024-07-15 11:39:46.893297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.359 qpair failed and we were unable to recover it. 00:29:03.359 [2024-07-15 11:39:46.893513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.359 [2024-07-15 11:39:46.893545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.359 qpair failed and we were unable to recover it. 00:29:03.359 [2024-07-15 11:39:46.893839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.359 [2024-07-15 11:39:46.893871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.359 qpair failed and we were unable to recover it. 00:29:03.359 [2024-07-15 11:39:46.894143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.359 [2024-07-15 11:39:46.894174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.359 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.894432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.894466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.894734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.894767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.895059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.895090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.895386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.895419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.895639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.895671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.895872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.895905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 
00:29:03.360 [2024-07-15 11:39:46.896202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.896247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.896546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.896579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.896803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.896835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.897047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.897079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.897396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.897429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.897572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.897603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.897884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.897915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.898186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.898217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.898531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.898563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.898780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.898811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 
00:29:03.360 [2024-07-15 11:39:46.899053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.899094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.899320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.899353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.899567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.899599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.899832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.899863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.900066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.900105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.900424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.900462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.900677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.900710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.900995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.901028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.901327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.901359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.901591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.901624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 
00:29:03.360 [2024-07-15 11:39:46.901784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.901818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.902036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.902068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.902335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.902370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.902533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.902566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.902758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.902830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.903068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.903103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.903308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.903341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.903634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.903667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.903845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.903878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 00:29:03.360 [2024-07-15 11:39:46.904172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.360 [2024-07-15 11:39:46.904206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.360 qpair failed and we were unable to recover it. 
00:29:03.361 [2024-07-15 11:39:46.904477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.361 [2024-07-15 11:39:46.904510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.361 qpair failed and we were unable to recover it. 00:29:03.361 [2024-07-15 11:39:46.904740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.361 [2024-07-15 11:39:46.904771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.361 qpair failed and we were unable to recover it. 00:29:03.361 [2024-07-15 11:39:46.905063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.361 [2024-07-15 11:39:46.905097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.361 qpair failed and we were unable to recover it. 00:29:03.361 [2024-07-15 11:39:46.905268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.361 [2024-07-15 11:39:46.905329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.361 qpair failed and we were unable to recover it. 00:29:03.361 [2024-07-15 11:39:46.905559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.361 [2024-07-15 11:39:46.905591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.361 qpair failed and we were unable to recover it. 00:29:03.361 [2024-07-15 11:39:46.905814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.361 [2024-07-15 11:39:46.905846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.361 qpair failed and we were unable to recover it. 00:29:03.361 [2024-07-15 11:39:46.906131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.906164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.906462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.906498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.906649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.906680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.906884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.906916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 
00:29:03.633 [2024-07-15 11:39:46.907221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.907264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.907493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.907525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.907802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.907836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.908073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.908133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.908375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.908411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.908580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.908613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.908780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.908846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.909065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.909098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.909348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.909384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.909607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.909639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 
00:29:03.633 [2024-07-15 11:39:46.909823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.909859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.910062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.910121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.910369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.910427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.910645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.910678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.911020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.911060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.911368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.911402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.911618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.911649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.911853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.911885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.912178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.912209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.912439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.912473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 
00:29:03.633 [2024-07-15 11:39:46.912760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.912792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.913077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.913108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.913328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.913367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.913573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.913605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.913891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.913927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.914240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.914274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.914552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.914586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.914884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.914917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.915068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.915100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.633 [2024-07-15 11:39:46.915338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.915371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 
00:29:03.633 [2024-07-15 11:39:46.915516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.633 [2024-07-15 11:39:46.915548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.633 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.915827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.915858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.916151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.916183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.916332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.916364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.916657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.916689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.916977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.917009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.917275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.917309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.917524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.917555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.917853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.917884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.918152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.918184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 
00:29:03.634 [2024-07-15 11:39:46.918504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.918537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.918795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.918828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.919059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.919090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.919383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.919416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.919621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.919652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.919948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.919979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.920189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.920220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.920501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.920534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.920775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.920806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.921017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.921049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 
00:29:03.634 [2024-07-15 11:39:46.921254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.921288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.921472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.921504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.921803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.921836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.921995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.922027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.922323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.922363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.922567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.922598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.922728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.922759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.922994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.923026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.923172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.923203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.923450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.923483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 
00:29:03.634 [2024-07-15 11:39:46.923752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.923784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.923927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.923961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.924256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.924289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.924560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.924592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.924809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.924841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.925128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.925159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.925390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.925423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.925650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.925681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.925907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.925939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.926160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.926192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 
00:29:03.634 [2024-07-15 11:39:46.926496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.634 [2024-07-15 11:39:46.926529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.634 qpair failed and we were unable to recover it. 00:29:03.634 [2024-07-15 11:39:46.926768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.926803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.927128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.927159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.927372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.927404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.927672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.927704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.927957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.927990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.928290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.928323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.928594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.928626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.928793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.928825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.929144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.929176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 
00:29:03.635 [2024-07-15 11:39:46.929419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.929455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.929762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.929794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.930089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.930120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.930275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.930309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.930508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.930542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.930767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.930803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.931075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.931108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.931385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.931418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.931660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.931691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.931913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.931945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 
00:29:03.635 [2024-07-15 11:39:46.932215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.932263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.932573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.932607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.932888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.932921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.933219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.933263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.933540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.933576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.933780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.933812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.934091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.934125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.934402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.934436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.934602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.934634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.934883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.934914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 
00:29:03.635 [2024-07-15 11:39:46.935191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.935223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.935393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.935425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.935690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.935721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.935993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.635 [2024-07-15 11:39:46.936025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.635 qpair failed and we were unable to recover it. 00:29:03.635 [2024-07-15 11:39:46.936261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.936295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.936508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.936540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.936686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.936718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.936949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.936981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.937277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.937311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.937619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.937652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 
00:29:03.636 [2024-07-15 11:39:46.937851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.937883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.938152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.938184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.938431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.938464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.938731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.938762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.939089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.939121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.939341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.939375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.939672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.939703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.940001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.940032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.940321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.940353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.940654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.940687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 
00:29:03.636 [2024-07-15 11:39:46.940837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.940869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.941143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.941175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.941397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.941430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.941742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.941773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.942067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.942098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.942317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.942350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.942689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.942722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.942956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.942988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.943221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.943267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.943559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.943591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 
00:29:03.636 [2024-07-15 11:39:46.943743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.943775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.943923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.943955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.944223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.944283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.944504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.944537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.944782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.944819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.945119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.945150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.945371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.945405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.945614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.945646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.945908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.945939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.636 [2024-07-15 11:39:46.946236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.946269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 
00:29:03.636 [2024-07-15 11:39:46.946561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.636 [2024-07-15 11:39:46.946593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.636 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.946889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.946922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.947123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.947155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.947351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.947384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.947657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.947690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.947993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.948024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.948221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.948263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.948489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.948522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.948808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.948840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.949110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.949143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 
00:29:03.637 [2024-07-15 11:39:46.949462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.949496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.949719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.949753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.949973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.950004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.950273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.950305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.950527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.950558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.950779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.950810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.951049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.951081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.951303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.951336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.951567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.951599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.951799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.951831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 
00:29:03.637 [2024-07-15 11:39:46.952099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.952132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.952335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.952368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.952660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.952690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.952891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.952957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.953252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.953287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.953490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.953523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.953741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.953774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.954060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.954095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.954386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.954419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.954656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.954689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 
00:29:03.637 [2024-07-15 11:39:46.954935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.954968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.637 [2024-07-15 11:39:46.955114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.637 [2024-07-15 11:39:46.955146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.637 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.955394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.955427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.955644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.955676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.955823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.955860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.956072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.956104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.956401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.956434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.956650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.956681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.956913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.956945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.957216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.957273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 
00:29:03.638 [2024-07-15 11:39:46.957429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.957460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.957608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.957640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.957840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.957871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.958167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.958198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.958483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.958515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.958733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.958764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.958906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.958938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.959247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.959280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.959580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.959613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.959914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.959947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 
00:29:03.638 [2024-07-15 11:39:46.960240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.960273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.960485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.960518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.960831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.960864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.961084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.961115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.961358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.961391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.961708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.961740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.961971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.962002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.962156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.962187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.962445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.962477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.962638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.962670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 
00:29:03.638 [2024-07-15 11:39:46.962884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.962916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.963139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.963172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.963485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.963519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.963732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.963763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.638 [2024-07-15 11:39:46.963979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.638 [2024-07-15 11:39:46.964011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.638 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.964214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.964258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.964529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.964561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.964781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.964813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.965060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.965092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.965304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.965337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 
00:29:03.639 [2024-07-15 11:39:46.965536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.965567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.965858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.965890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.966206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.966248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.966502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.966533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.966756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.966794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.967011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.967043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.967244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.967278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.967550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.967582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.967814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.967846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.968120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.968152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 
00:29:03.639 [2024-07-15 11:39:46.968318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.968366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.968605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.968637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.968915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.968946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.969145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.969176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.969468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.969502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.969752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.969784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.969955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.969987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.970213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.970259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.970547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.970579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.970795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.970827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 
00:29:03.639 [2024-07-15 11:39:46.971097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.971130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.971363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.971396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.971630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.971662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.971955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.971986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.972203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.972246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.972518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.972550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.972765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.972797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.972953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.972985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.973200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.973245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.973386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.973418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 
00:29:03.639 [2024-07-15 11:39:46.973733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.973765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.973978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.974056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.639 [2024-07-15 11:39:46.974312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.639 [2024-07-15 11:39:46.974353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.639 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.974629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.974662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.974879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.974911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.975129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.975161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.975365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.975397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.975618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.975651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.975918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.975951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.976195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.976237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 
00:29:03.640 [2024-07-15 11:39:46.976506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.976538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.976827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.976860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.976984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.977016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.977168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.977200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.977431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.977462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.977686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.977718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.977887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.977918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.978114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.978145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.978436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.978469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.978737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.978770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 
00:29:03.640 [2024-07-15 11:39:46.978932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.978964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.979190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.979222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.979431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.979463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.979732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.979762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.979911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.979941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.980213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.980254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.980449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.980481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.980636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.980668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.980868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.980906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.981178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.981210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 
00:29:03.640 [2024-07-15 11:39:46.981431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.981464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.981752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.981784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.981991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.982023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.982302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.982334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.982606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.640 [2024-07-15 11:39:46.982638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.640 qpair failed and we were unable to recover it. 00:29:03.640 [2024-07-15 11:39:46.982943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.982974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.983185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.983217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.983523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.983556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.983763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.983794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.984087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.984118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 
00:29:03.641 [2024-07-15 11:39:46.984431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.984465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.984678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.984709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.984983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.985016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.985259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.985292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.985578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.985610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.985874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.985905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.986129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.986160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.986453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.986486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.986650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.986681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.986877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.986909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 
00:29:03.641 [2024-07-15 11:39:46.987202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.987246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.987474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.987507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.987773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.987804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.988005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.988036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.988327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.988360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.988574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.988618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.988819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.988850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.989053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.989085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.989238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.989270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.989467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.989499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 
00:29:03.641 [2024-07-15 11:39:46.989653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.989685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.989970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.990002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.990202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.990244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.990484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.990515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.990730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.990762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.990972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.991003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.991210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.991260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.991462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.991495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.991646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.991677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.991898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.991931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 
00:29:03.641 [2024-07-15 11:39:46.992242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.992275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.992488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.992520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.992726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.992758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.992969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.993000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.993152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.993183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.641 qpair failed and we were unable to recover it. 00:29:03.641 [2024-07-15 11:39:46.993409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.641 [2024-07-15 11:39:46.993442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.993727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.993759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.994046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.994078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.994292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.994324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.994533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.994565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 
00:29:03.642 [2024-07-15 11:39:46.994725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.994755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.994976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.995008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.995162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.995201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.995371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.995403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.995598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.995630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.995783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.995814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.996011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.996042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.996311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.996344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.996566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.996598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.996759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.996791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 
00:29:03.642 [2024-07-15 11:39:46.996996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.997028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.997295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.997328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.997538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.997570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.997732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.997764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.997973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.998005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.998244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.998277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.998424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.998457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.998699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.998733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.999488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.642 [2024-07-15 11:39:46.999526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.642 qpair failed and we were unable to recover it. 00:29:03.642 [2024-07-15 11:39:46.999724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:46.999756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 
00:29:03.643 [2024-07-15 11:39:46.999999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.000031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.000258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.000291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.000482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.000514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.000651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.000682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.000910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.000941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.001167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.001198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.001423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.001455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.001600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.001632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.001919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.001950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.002100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.002131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 
00:29:03.643 [2024-07-15 11:39:47.002342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.002374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.002595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.002626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.002837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.002869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.003095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.003126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.003256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.003289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.003481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.003512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.003726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.003757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.004020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.004051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.004251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.004283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.004507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.004538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 
00:29:03.643 [2024-07-15 11:39:47.004690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.004722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.004916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.004947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.005142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.005173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.005312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.005345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.005496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.005527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.005795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.005827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.006077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.006109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.006381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.006414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.006614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.006645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.006912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.006943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 
00:29:03.643 [2024-07-15 11:39:47.007102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.007135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.007333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.007366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.007570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.007602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.007745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.007777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.007986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.008017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.008251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.008284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.008429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.008462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.643 [2024-07-15 11:39:47.008743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.643 [2024-07-15 11:39:47.008776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.643 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.008977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.009008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.009205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.009246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 
00:29:03.644 [2024-07-15 11:39:47.009440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.009471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.009689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.009720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.010003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.010034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.010254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.010286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.010501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.010533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.010692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.010725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.010950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.010982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.011202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.011246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.011518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.011549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.011678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.011710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 
00:29:03.644 [2024-07-15 11:39:47.011850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.011887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.012009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.012040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.012260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.012293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.012504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.012537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.012731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.012763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.012904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.012937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.013149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.013181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.013412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.013445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.013592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.013623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.013837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.013869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 
00:29:03.644 [2024-07-15 11:39:47.014075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.014106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.014318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.014349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.014562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.014613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.014821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.014853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.015079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.015109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.015422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.015456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.015603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.015635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.015838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.015871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.016132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.016164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.016319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.016355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 
00:29:03.644 [2024-07-15 11:39:47.016553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.016585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.016811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.016843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.017049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.017083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.017293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.017329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.017471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.017503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.017705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.017736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.017887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.017917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.644 [2024-07-15 11:39:47.018127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.644 [2024-07-15 11:39:47.018164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.644 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.018367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.018401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.018667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.018698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 
00:29:03.645 [2024-07-15 11:39:47.018841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.018873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.019136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.019167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.019380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.019412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.019561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.019593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.019784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.019815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.020099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.020130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.020342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.020374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.020524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.020556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.020807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.020838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.021034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.021066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 
00:29:03.645 [2024-07-15 11:39:47.021337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.021369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.021590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.021622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.021829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.021861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.022016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.022047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.022202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.022241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.022395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.022427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.022572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.022603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.022798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.022830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.022959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.022990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.023280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.023312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 
00:29:03.645 [2024-07-15 11:39:47.023457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.023489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.023689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.023719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.023862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.023894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.024094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.024126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.024265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.024303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.024501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.024533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.024706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.024737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.024864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.024896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.025100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.025132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.025355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.025387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 
00:29:03.645 [2024-07-15 11:39:47.025543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.025574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.025728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.025759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.025885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.025916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.026049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.026080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.026220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.026259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.026467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.026498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.026709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.026739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.026934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.645 [2024-07-15 11:39:47.026967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.645 qpair failed and we were unable to recover it. 00:29:03.645 [2024-07-15 11:39:47.027130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.027162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.027433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.027465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 
00:29:03.646 [2024-07-15 11:39:47.027594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.027625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.027885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.027915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.028042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.028072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.028256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.028288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.028578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.028609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.028758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.028788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.028925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.028956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.029077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.029108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.029307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.029338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.029468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.029499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 
00:29:03.646 [2024-07-15 11:39:47.029634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.029664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.029788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.029819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.029970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.030000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.030199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.030238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.030435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.030466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.030738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.030769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.030966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.030997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.031222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.031260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.031405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.031435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.031626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.031657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 
00:29:03.646 [2024-07-15 11:39:47.031782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.031813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.031956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.031987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.032190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.032222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.032369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.032400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.032514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.032545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.032758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.032794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.032936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.032967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.033181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.033212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.033366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.033397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.033546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.033576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 
00:29:03.646 [2024-07-15 11:39:47.033857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.033887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.034019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.034050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.034258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.034291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.034443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.034474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.034615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.034645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.034869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.034899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.035110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.035141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.035344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.035375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.646 [2024-07-15 11:39:47.035563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.646 [2024-07-15 11:39:47.035595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.646 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.035744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.035776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 
00:29:03.647 [2024-07-15 11:39:47.035903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.035934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.036138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.036169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.036322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.036354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.036488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.036518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.036661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.036691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.036884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.036915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.037115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.037145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.037409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.037442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.037585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.037615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.037878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.037909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 
00:29:03.647 [2024-07-15 11:39:47.038196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.038234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.038379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.038411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.038619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.038655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.038782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.038812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.039036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.039066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.039194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.039236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.039447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.039478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.039736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.039767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.039957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.039988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.040207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.040245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 
00:29:03.647 [2024-07-15 11:39:47.040438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.040469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.040662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.040692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.040817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.040848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.041037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.041067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.041200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.041241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.041434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.041464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.041620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.041651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.041810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.041841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.042035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.042065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.042208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.042249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 
00:29:03.647 [2024-07-15 11:39:47.042392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.042423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.042560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.042590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.647 qpair failed and we were unable to recover it. 00:29:03.647 [2024-07-15 11:39:47.042769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-07-15 11:39:47.042800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.042992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.043023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.043172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.043203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.043425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.043456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.043672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.043703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.043924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.043954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.044163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.044193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.044356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.044394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 
00:29:03.648 [2024-07-15 11:39:47.044532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.044563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.044686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.044716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.044979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.045010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.045202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.045244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.045455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.045485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.045631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.045662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.045850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.045881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.046018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.046049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.046200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.046241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.046368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.046399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 
00:29:03.648 [2024-07-15 11:39:47.046522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.046552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.046839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.046871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.047127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.047157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.047322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.047355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.047499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.047530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.047762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.047792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.048077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.048107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.048366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.048399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.048594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.048624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.048827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.048858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 
00:29:03.648 [2024-07-15 11:39:47.049048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.049078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.049232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.049265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.049401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.049433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.049645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.049676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.049814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.049845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.050044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.050075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.050356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.050388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.050533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.050564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.050764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.050795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.051000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.051031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 
00:29:03.648 [2024-07-15 11:39:47.051236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.051267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.051523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.051554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.648 [2024-07-15 11:39:47.051749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.648 [2024-07-15 11:39:47.051780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.648 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.051909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.051940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.052088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.052118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.052321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.052353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.052557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.052588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.052723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.052754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.052963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.052994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.053138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.053169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 
00:29:03.649 [2024-07-15 11:39:47.053473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.053505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.053693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.053723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.054005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.054036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.054242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.054274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.054402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.054432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.054567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.054597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.054788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.054818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.054928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.054958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.055196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.055253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.055380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.055412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 
00:29:03.649 [2024-07-15 11:39:47.055600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.055630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.055837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.055867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.056025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.056055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.056245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.056277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.056471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.056503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.056717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.056748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.056873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.056903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.057107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.057137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.057256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.057289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.057429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.057459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 
00:29:03.649 [2024-07-15 11:39:47.057744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.057774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.057961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.057991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.058165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.058195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.058408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.058439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.058630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.058659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.058912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.058942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.059095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.059125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.059408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.059444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.059633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.059664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.059796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.059827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 
00:29:03.649 [2024-07-15 11:39:47.060017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.060048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.060178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.060209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.060358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.060388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.649 [2024-07-15 11:39:47.060526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.649 [2024-07-15 11:39:47.060556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.649 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.060839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.060870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.061033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.061064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.061301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.061334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.061474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.061504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.061695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.061726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.061933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.061964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 
00:29:03.650 [2024-07-15 11:39:47.062105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.062136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.062330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.062363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.062642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.062673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.062955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.062985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.063182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.063213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.063420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.063451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.063706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.063737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.063866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.063897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.064167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.064198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.064485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.064516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 
00:29:03.650 [2024-07-15 11:39:47.064798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.064829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.065037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.065067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.065196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.065236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.065427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.065458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.065644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.065679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.065805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.065835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.066113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.066143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.066262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.066293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.066513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.066544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.066769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.066799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 
00:29:03.650 [2024-07-15 11:39:47.066954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.066985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.067114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.067144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.067372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.067403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.067608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.067638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.067760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.067791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.068047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.068077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.068233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.068264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.068404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.068434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.068615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.068646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.068898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.068929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 
00:29:03.650 [2024-07-15 11:39:47.069064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.069094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.069375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.069408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.069554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.069585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.069706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.650 [2024-07-15 11:39:47.069737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.650 qpair failed and we were unable to recover it. 00:29:03.650 [2024-07-15 11:39:47.069876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.069906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.070095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.070126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.070265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.070297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.070503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.070534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.070657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.070687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.070884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.070914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 
00:29:03.651 [2024-07-15 11:39:47.071035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.071066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.071206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.071250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.071403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.071434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.071575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.071606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.071729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.071759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.071982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.072012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.072244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.072287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.072471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.072502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.072653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.072684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.072908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.072938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 
00:29:03.651 [2024-07-15 11:39:47.073130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.073160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.073362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.073393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.073618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.073648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.073908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.073938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.074091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.074121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.074319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.074351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.074493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.074523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.074643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.074673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.074875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.074906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.075219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.075258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 
00:29:03.651 [2024-07-15 11:39:47.075384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.075414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.075665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.075696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.075818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.075849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.075996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.076027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.076218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.076255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.076534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.076566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.076753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.076784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.076986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.077017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.077138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.651 [2024-07-15 11:39:47.077168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.651 qpair failed and we were unable to recover it. 00:29:03.651 [2024-07-15 11:39:47.077383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.077415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 
00:29:03.652 [2024-07-15 11:39:47.077537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.077568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.077757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.077787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.077904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.077934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.078072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.078103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.078291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.078322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.078530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.078561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.078783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.078814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.078948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.078977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.079191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.079222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.079393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.079423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 
00:29:03.652 [2024-07-15 11:39:47.079614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.079644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.079779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.079810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.080083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.080123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.080327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.080359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.080581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.080611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.080759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.080789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.080976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.081006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.081195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.081232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.081487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.081517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.081721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.081751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 
00:29:03.652 [2024-07-15 11:39:47.082016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.082046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.082240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.082272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.082423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.082454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.082678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.082708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.082975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.083006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.083272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.083303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.083529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.083560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.083837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.083868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.084141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.084172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.084365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.084397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 
00:29:03.652 [2024-07-15 11:39:47.084686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.084717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.084857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.084887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.085093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.085124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.085403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.085434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.085631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.085661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.085938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.085969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.086190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.086220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.086417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.086448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.086576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.086606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 00:29:03.652 [2024-07-15 11:39:47.086819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.652 [2024-07-15 11:39:47.086854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.652 qpair failed and we were unable to recover it. 
00:29:03.652 [2024-07-15 11:39:47.086980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.087010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.087279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.087312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.087447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.087478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.087777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.087806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.088077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.088107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.088330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.088361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.088550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.088580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.088728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.088759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.088904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.088935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.089129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.089159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 
00:29:03.653 [2024-07-15 11:39:47.089379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.089410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.089671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.089701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.090002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.090032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.090234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.090267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.090549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.090580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.090834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.090864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.091076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.091107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.091246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.091277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.091529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.091560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.091762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.091793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 
00:29:03.653 [2024-07-15 11:39:47.092015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.092045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.092249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.092281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.092473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.092503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.092705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.092735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.093008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.093039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.093235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.093266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.093413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.093449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.093720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.093750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.093885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.093914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.094100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.094131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 
00:29:03.653 [2024-07-15 11:39:47.094264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.094296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.094443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.094473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.094751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.094781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.095032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.095062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.095210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.095269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.095407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.095454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.095633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.095663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.095803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.095833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.096044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.096074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.096275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.096306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 
00:29:03.653 [2024-07-15 11:39:47.096516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.096547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.653 [2024-07-15 11:39:47.096738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.653 [2024-07-15 11:39:47.096768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.653 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.096966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.096995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.097145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.097174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.097459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.097490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.097695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.097724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.097930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.097961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.098110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.098140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.098294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.098325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.098501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.098531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 
00:29:03.654 [2024-07-15 11:39:47.098724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.098755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.098978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.099009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.099214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.099255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.099542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.099572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.099701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.099731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.099872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.099903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.100110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.100141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.100261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.100292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.100500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.100529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.100735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.100765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 
00:29:03.654 [2024-07-15 11:39:47.101063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.101093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.101348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.101379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.101577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.101607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.101857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.101887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.102160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.102190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.102474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.102506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.102657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.102688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.102918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.102987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.103247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.103283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.103487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.103518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 
00:29:03.654 [2024-07-15 11:39:47.103729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.103760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.103950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.103980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.104169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.104200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.104347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.104381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.104585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.104615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.104764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.104794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.104942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.104973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.105099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.105128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.105249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.105281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.105582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.105613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 
00:29:03.654 [2024-07-15 11:39:47.105832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.105862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.106022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.654 [2024-07-15 11:39:47.106053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.654 qpair failed and we were unable to recover it. 00:29:03.654 [2024-07-15 11:39:47.106200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.106237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.106358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.106388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.106521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.106551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.106828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.106857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.107044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.107074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.107264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.107294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.107481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.107511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.107710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.107740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 
00:29:03.655 [2024-07-15 11:39:47.107888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.107918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.108105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.108135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.108363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.108394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.108670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.108701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.108928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.108959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.109233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.109264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.109454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.109484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.109611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.109642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.109842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.109872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.110059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.110089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 
00:29:03.655 [2024-07-15 11:39:47.110292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.110323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.110459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.110489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.110761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.110791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.110922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.110951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.111139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.111169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.111470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.111501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.111692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.111723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.111954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.111983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.112265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.112296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.112434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.112464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 
00:29:03.655 [2024-07-15 11:39:47.112659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.112689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.112937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.112966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.113165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.113194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.113467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.113499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.113697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.113726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.114022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.114053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.114325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.114357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.114478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.655 [2024-07-15 11:39:47.114508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.655 qpair failed and we were unable to recover it. 00:29:03.655 [2024-07-15 11:39:47.114659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.114690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.114814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.114845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 
00:29:03.656 [2024-07-15 11:39:47.115031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.115061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.115258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.115293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.115503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.115533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.115758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.115788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.115936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.115965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.116187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.116217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.116346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.116377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.116514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.116543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.116660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.116690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.116949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.116978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 
00:29:03.656 [2024-07-15 11:39:47.117164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.117193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.117331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.117363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.117616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.117646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.117832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.117862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.118045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.118075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.118381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.118413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.118692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.118721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.118973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.119003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.119280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.119311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.119513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.119544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 
00:29:03.656 [2024-07-15 11:39:47.119681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.119711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.119918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.119948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.120153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.120183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.120379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.120410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.120665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.120696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.120902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.120932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.121158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.121189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.121399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.121431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.121628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.121663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.121867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.121898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 
00:29:03.656 [2024-07-15 11:39:47.122109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.122139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.122331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.122362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.122487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.122517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.122714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.122744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.122942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.122973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.123189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.123220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.123369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.123399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.656 [2024-07-15 11:39:47.123542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.656 [2024-07-15 11:39:47.123572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.656 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.123760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.123791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.123980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.124009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 
00:29:03.657 [2024-07-15 11:39:47.124238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.124269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.124485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.124515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.124796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.124826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.125076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.125106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.125308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.125339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.125527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.125557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.125686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.125716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.125921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.125950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.126171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.126201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.126404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.126436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 
00:29:03.657 [2024-07-15 11:39:47.126690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.126720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.126974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.127005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.127285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.127317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.127514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.127544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.127746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.127776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.127912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.127942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.128149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.128179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.128393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.128425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.128556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.128586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.128848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.128878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 
00:29:03.657 [2024-07-15 11:39:47.129072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.129102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.129247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.129277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.129481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.129511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.129710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.129740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.129874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.129904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.130095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.130125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.130321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.130352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.130540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.130570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.130714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.130745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.131092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.131161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 
00:29:03.657 [2024-07-15 11:39:47.131484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.131520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.131805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.131837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.132095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.132126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.132266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.132299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.132504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.132535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.132792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.132822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.133023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.133054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.133257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.133288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.657 qpair failed and we were unable to recover it. 00:29:03.657 [2024-07-15 11:39:47.133479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.657 [2024-07-15 11:39:47.133510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.133652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.133683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 
00:29:03.658 [2024-07-15 11:39:47.133811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.133841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.134025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.134056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.134344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.134385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.134530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.134560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.134844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.134875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.135056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.135089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.135281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.135312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.135502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.135533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.135738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.135768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.135984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.136014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 
00:29:03.658 [2024-07-15 11:39:47.136163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.136193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.136427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.136458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.136638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.136668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.136951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.136982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.137116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.137146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.137293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.137325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.137516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.137546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.137799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.137829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.137956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.137987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.138173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.138203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 
00:29:03.658 [2024-07-15 11:39:47.138367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.138398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.138606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.138636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.138831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.138861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.138998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.139029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.139240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.139272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.139461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.139492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.139744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.139774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.139921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.139951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.140097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.140127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.140329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.140361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 
00:29:03.658 [2024-07-15 11:39:47.140502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.140532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.140791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.140822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.141083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.141113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.141439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.141470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.141665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.141695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.141820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.141851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.141986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.142017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.142218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.142256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.142378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.658 [2024-07-15 11:39:47.142409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.658 qpair failed and we were unable to recover it. 00:29:03.658 [2024-07-15 11:39:47.142618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.142649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 
00:29:03.659 [2024-07-15 11:39:47.142792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.142823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.143011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.143041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.143247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.143284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.143488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.143519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.143649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.143680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.143868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.143899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.144022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.144052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.144242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.144274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.144408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.144437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.144574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.144604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 
00:29:03.659 [2024-07-15 11:39:47.144799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.144829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.145115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.145145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.145363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.145394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.145535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.145565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.145723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.145754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.145954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.145985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.146260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.146291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.146492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.146523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.146642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.146672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.146970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.147000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 
00:29:03.659 [2024-07-15 11:39:47.147142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.147171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.147384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.147416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.147691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.147723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.147940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.147970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.148156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.148186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.148358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.148390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.148592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.148622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.148798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.148829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.149023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.149052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.149261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.149294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 
00:29:03.659 [2024-07-15 11:39:47.149482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.149512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.149696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.149726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.149847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.149877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.150148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.150179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.150335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.150366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.150554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.150585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.150772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.150802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.151005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.151036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.151314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.151346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 00:29:03.659 [2024-07-15 11:39:47.151589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.659 [2024-07-15 11:39:47.151618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.659 qpair failed and we were unable to recover it. 
00:29:03.660 [2024-07-15 11:39:47.151828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.151857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.152153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.152183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.152378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.152414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.152608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.152639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.152882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.152913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.153100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.153131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.153408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.153439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.153642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.153673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.153896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.153926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.154139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.154169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 
00:29:03.660 [2024-07-15 11:39:47.154379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.154411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.154611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.154641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.154763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.154793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.155048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.155078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.155276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.155308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.155454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.155485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.155748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.155779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.155967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.155998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.156120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.156151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.156426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.156458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 
00:29:03.660 [2024-07-15 11:39:47.156593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.156624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.156815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.156847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.157098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.157129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.157267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.157299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.157416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.157446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.157661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.157692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.157881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.157912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.158166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.158196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.158423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.158454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.158762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.158798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 
00:29:03.660 [2024-07-15 11:39:47.158997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.159028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.159309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.159340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.159475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.159506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.159707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.159737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.159957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.660 [2024-07-15 11:39:47.159987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.660 qpair failed and we were unable to recover it. 00:29:03.660 [2024-07-15 11:39:47.160140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.160170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.160305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.160337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.160596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.160626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.160747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.160778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.160909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.160939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 
00:29:03.661 [2024-07-15 11:39:47.161129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.161159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.161307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.161339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.161493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.161523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.161755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.161786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.162004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.162035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.162163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.162193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.162334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.162366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.162556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.162587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.162830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.162860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.163062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.163093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 
00:29:03.661 [2024-07-15 11:39:47.163251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.163283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.163417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.163448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.163576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.163606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.163830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.163861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.164060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.164090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.164215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.164265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.164473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.164504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.164642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.164672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.164879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.164910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.165101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.165132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 
00:29:03.661 [2024-07-15 11:39:47.165321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.165353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.165491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.165522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.165646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.165676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.165888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.165918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.166061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.166092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.166205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.166245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.166435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.166465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.166669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.166700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.166828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.166859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.167049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.167084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 
00:29:03.661 [2024-07-15 11:39:47.167281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.167312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.167524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.167554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.167778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.167808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.167944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.167974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.168198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.168234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.661 [2024-07-15 11:39:47.168421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.661 [2024-07-15 11:39:47.168452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.661 qpair failed and we were unable to recover it. 00:29:03.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 765452 Killed "${NVMF_APP[@]}" "$@" 00:29:03.662 [2024-07-15 11:39:47.168728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.168759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.168949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.168979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.169194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.169235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 
00:29:03.662 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:03.662 [2024-07-15 11:39:47.169514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.169545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:03.662 [2024-07-15 11:39:47.169732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.169763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:03.662 [2024-07-15 11:39:47.169993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.170024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.170214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.170257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.170469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.170501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:03.662 [2024-07-15 11:39:47.170756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.170785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.170988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.171018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.171145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.171175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 
00:29:03.662 [2024-07-15 11:39:47.171459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.171494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.171716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.171746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.172020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.172049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.172265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.172298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.172442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.172471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.172591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.172620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.172891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.172925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.173176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.173206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.173470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.173501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.173700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.173728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 
00:29:03.662 [2024-07-15 11:39:47.173929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.173959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.174090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.174120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.174273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.174302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.174450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.174478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.174660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.174692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.174916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.174945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.175128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.175157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.175350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.175380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.175507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.175536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.175726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.175755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 
00:29:03.662 [2024-07-15 11:39:47.175951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.175981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.176107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.176137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.176360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.176390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.176609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.176638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.176828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.176858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=766303 00:29:03.662 [2024-07-15 11:39:47.177039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.177069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.662 [2024-07-15 11:39:47.177206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.662 [2024-07-15 11:39:47.177243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.662 qpair failed and we were unable to recover it. 00:29:03.663 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 766303 00:29:03.663 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:03.663 [2024-07-15 11:39:47.177495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.177524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 
00:29:03.663 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 766303 ']' 00:29:03.663 [2024-07-15 11:39:47.177795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.177825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.663 [2024-07-15 11:39:47.178080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.178110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:03.663 [2024-07-15 11:39:47.178337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.178372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 [2024-07-15 11:39:47.178492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.178522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.663 [2024-07-15 11:39:47.178772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.178802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 [2024-07-15 11:39:47.179018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.179048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 11:39:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:03.663 [2024-07-15 11:39:47.179201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.179241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 
00:29:03.663 [2024-07-15 11:39:47.179431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.179461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 [2024-07-15 11:39:47.179613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.179643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 [2024-07-15 11:39:47.179839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.179868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 [2024-07-15 11:39:47.179990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.180020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 [2024-07-15 11:39:47.180237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.180267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 [2024-07-15 11:39:47.180472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.180501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 [2024-07-15 11:39:47.180643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.180672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 [2024-07-15 11:39:47.180885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.180915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 [2024-07-15 11:39:47.181070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.181101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 00:29:03.663 [2024-07-15 11:39:47.181308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.663 [2024-07-15 11:39:47.181338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.663 qpair failed and we were unable to recover it. 
00:29:03.666 [2024-07-15 11:39:47.206352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.666 [2024-07-15 11:39:47.206384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.666 qpair failed and we were unable to recover it. 00:29:03.666 [2024-07-15 11:39:47.206575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.666 [2024-07-15 11:39:47.206605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.666 qpair failed and we were unable to recover it. 00:29:03.666 [2024-07-15 11:39:47.206790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.666 [2024-07-15 11:39:47.206819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.666 qpair failed and we were unable to recover it. 00:29:03.666 [2024-07-15 11:39:47.206929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.666 [2024-07-15 11:39:47.206958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.666 qpair failed and we were unable to recover it. 00:29:03.666 [2024-07-15 11:39:47.207174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.666 [2024-07-15 11:39:47.207204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.666 qpair failed and we were unable to recover it. 00:29:03.666 [2024-07-15 11:39:47.207417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.666 [2024-07-15 11:39:47.207450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.666 qpair failed and we were unable to recover it. 00:29:03.666 [2024-07-15 11:39:47.207581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.666 [2024-07-15 11:39:47.207611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.666 qpair failed and we were unable to recover it. 00:29:03.666 [2024-07-15 11:39:47.207805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.666 [2024-07-15 11:39:47.207873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.666 qpair failed and we were unable to recover it. 00:29:03.666 [2024-07-15 11:39:47.208093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.666 [2024-07-15 11:39:47.208128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.666 qpair failed and we were unable to recover it. 00:29:03.666 [2024-07-15 11:39:47.208319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.666 [2024-07-15 11:39:47.208352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.666 qpair failed and we were unable to recover it. 
00:29:03.949 [2024-07-15 11:39:47.216440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.216470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.216577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.216606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.216876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.216906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.217115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.217144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.217267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.217297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.217442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.217471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.217601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.217631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.217890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.217959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.218159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.218237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.218386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.218425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 
00:29:03.949 [2024-07-15 11:39:47.218552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.218582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.218689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.218718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.218922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.218952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.219091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.219121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.219318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.219348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.219488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.219518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.219637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.219667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.219922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.219951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.220090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.220121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.220293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.220324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 
00:29:03.949 [2024-07-15 11:39:47.220518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.220554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.949 [2024-07-15 11:39:47.220746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.949 [2024-07-15 11:39:47.220777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.949 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.220927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.220957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.221088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.221118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.221250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.221280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.221402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.221431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.221543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.221573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.221776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.221806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.221933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.221963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.222242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.222272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 
00:29:03.950 [2024-07-15 11:39:47.222419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.222449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.222678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.222708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.222898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.222928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.223141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.223170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.223337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.223368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.223486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.223517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.223647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.223676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.223864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.223893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.224093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.224123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.224248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.224259] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:29:03.950 [2024-07-15 11:39:47.224279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.224299] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.950 [2024-07-15 11:39:47.224417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.224446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.224565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.224592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.224714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.224741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.224937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.224966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.225102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.225132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.225255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.225284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.225547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.225576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.225707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.225736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it. 
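The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: the NVMe/TCP initiator keeps retrying 10.0.0.2:4420 while the nvmf target (launched with the DPDK EAL parameters shown) is not yet listening. A minimal sketch of how that errno is produced, assuming a plain POSIX socket and that the address is reachable but nothing is listening on the port; this is illustrative only and not part of the SPDK test code:

    /* Illustrative sketch: connect() to a reachable host with no listener
     * on the port returns -1 with errno 111 (ECONNREFUSED), matching the
     * posix_sock_create() errors logged above. Address/port are examples. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port used by the test */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }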
00:29:03.950 [2024-07-15 11:39:47.225912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.225943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.226067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.226097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.226297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.226328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.226446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.226477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.226733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.226762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.226972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.227003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.227150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.227179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.227326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.227356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.227503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.227533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.227683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.227713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 
00:29:03.950 [2024-07-15 11:39:47.227842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.227871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.228018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.228052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.228256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.950 [2024-07-15 11:39:47.228287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.950 qpair failed and we were unable to recover it. 00:29:03.950 [2024-07-15 11:39:47.228491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.228521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.228709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.228738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.228940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.228970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.229116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.229146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.229340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.229372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.229484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.229514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.229710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.229740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 
00:29:03.951 [2024-07-15 11:39:47.229866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.229895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.230039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.230068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.230198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.230238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.230443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.230473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.230660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.230689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.230815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.230845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.230968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.230997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.231183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.231212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.231358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.231388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.231529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.231558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 
00:29:03.951 [2024-07-15 11:39:47.231694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.231724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.231943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.231973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.232168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.232197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.232369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.232407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.232609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.232639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.232778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.232807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.232990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.233020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.233144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.233173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.233333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.233375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.233522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.233554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 
00:29:03.951 [2024-07-15 11:39:47.233705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.233735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.233839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.233869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.234075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.234105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.234263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.234293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.234439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.234468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.234572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.234601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.234806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.234835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.234992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.235021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.235147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.235176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.235297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.235327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 
00:29:03.951 [2024-07-15 11:39:47.235604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.235634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.235830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.235868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.236008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.236038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.951 [2024-07-15 11:39:47.236163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.951 [2024-07-15 11:39:47.236192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.951 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.236331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.236361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.236611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.236639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.236762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.236790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.236987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.237017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.237212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.237249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.237358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.237388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 
00:29:03.952 [2024-07-15 11:39:47.237515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.237544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.237768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.237797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.238000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.238029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.238244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.238274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.238467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.238495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.238702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.238731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.238927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.238956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.239110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.239139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.239290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.239321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.239509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.239539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 
00:29:03.952 [2024-07-15 11:39:47.239680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.239709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.239901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.239929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.240058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.240089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.240257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.240287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.240480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.240508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.240636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.240665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.240915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.240944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.241082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.241111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.241273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.241307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.241444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.241473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 
00:29:03.952 [2024-07-15 11:39:47.241669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.241698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.241954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.241983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.242193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.242222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.242378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.242408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.242724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.242753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.242899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.242928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.243043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.243072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.243325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.243355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.243486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.243515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.243704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.243733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 
00:29:03.952 [2024-07-15 11:39:47.243859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.243888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.244033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.244067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.244343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.952 [2024-07-15 11:39:47.244373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.952 qpair failed and we were unable to recover it. 00:29:03.952 [2024-07-15 11:39:47.244516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.244546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.244680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.244710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.244893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.244923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.245050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.245079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.245287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.245318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.245507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.245536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.245734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.245763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 
00:29:03.953 [2024-07-15 11:39:47.245959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.245988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.246173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.246202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.246406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.246437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.246649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.246682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.246862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.246894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.247086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.247119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.247254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.247285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.247472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.247500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.247628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.247658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.247845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.247875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 
00:29:03.953 [2024-07-15 11:39:47.248002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.248031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.248158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.248188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.248337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.248368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.248630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.248660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.248887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.248916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.249101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.249131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.249267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.249296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.249492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.249520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.249727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.249765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.249925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.249956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 
00:29:03.953 [2024-07-15 11:39:47.250093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.250123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.250256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.250287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.250505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.250534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.250662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.250692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.250814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.953 [2024-07-15 11:39:47.250844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.953 qpair failed and we were unable to recover it. 00:29:03.953 [2024-07-15 11:39:47.251043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.251072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.251263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.251293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.251442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.251471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.251596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.954 [2024-07-15 11:39:47.251626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.251763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.251793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 
00:29:03.954 [2024-07-15 11:39:47.251980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.252009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.252130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.252166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.252364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.252395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.252649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.252679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.252878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.252907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.253106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.253136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.253261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.253290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.253481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.253510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.253639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.253668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.253796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.253825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 
00:29:03.954 [2024-07-15 11:39:47.253957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.253985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.254172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.254202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.254347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.254377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.254508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.254537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.254793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.254822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.254947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.254976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.255083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.255114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.255364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.255394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.255496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.255526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.255649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.255678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 
00:29:03.954 [2024-07-15 11:39:47.255879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.255907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.256122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.256152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.256341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.256370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.256552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.256581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.256803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.256834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.256958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.256986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.257126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.257155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.257337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.257366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.257571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.257601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.257740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.257769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 
00:29:03.954 [2024-07-15 11:39:47.257965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.257994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.258167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.258196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.258343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.258373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.258572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.258602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.258754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.258783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.258922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.954 [2024-07-15 11:39:47.258951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.954 qpair failed and we were unable to recover it. 00:29:03.954 [2024-07-15 11:39:47.259071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.259099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.259300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.259331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.259584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.259613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.259767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.259796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 
00:29:03.955 [2024-07-15 11:39:47.259940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.259969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.260154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.260189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.260392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.260422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.261808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.261858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.262150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.262182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.262386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.262417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.262620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.262649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.262929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.262958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.263155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.263184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.263483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.263514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 
00:29:03.955 [2024-07-15 11:39:47.263665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.263695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.263949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.263978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.264165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.264194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.264487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.264522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.264658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.264687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.264824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.264856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.265058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.265087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.265348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.265380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.265512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.265541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.265738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.265766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 
00:29:03.955 [2024-07-15 11:39:47.265904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.265933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.266094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.266122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.266255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.266285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.266419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.266448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.266587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.266616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.266751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.266780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.266966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.266995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.267194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.267223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.267424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.267454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.267583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.267611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 
00:29:03.955 [2024-07-15 11:39:47.267810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.267840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.267967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.267996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.268141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.268170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.268393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.268422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.268547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.268576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.268777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.955 [2024-07-15 11:39:47.268807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.955 qpair failed and we were unable to recover it. 00:29:03.955 [2024-07-15 11:39:47.268944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.268973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.269094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.269123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.269342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.269373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.269523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.269552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 
00:29:03.956 [2024-07-15 11:39:47.269757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.269787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.269972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.270006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.270209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.270245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.270450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.270479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.270750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.270779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.270979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.271009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.271126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.271154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.271373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.271403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.271604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.271633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.271767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.271796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 
00:29:03.956 [2024-07-15 11:39:47.271995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.272024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.272188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.272216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.272420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.272449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.272648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.272678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.272886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.272915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.273045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.273076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.273265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.273295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.273553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.273583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.273782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.273810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.274069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.274098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 
00:29:03.956 [2024-07-15 11:39:47.274237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.274267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.274398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.274427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.274572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.274601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.274801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.274830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.274950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.274979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.275177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.275207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.275333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.275362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.275489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.275518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.275661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.275690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.275825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.275854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 
00:29:03.956 [2024-07-15 11:39:47.275991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.276020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.276143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.276171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.276321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.276351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.276483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.276513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.276697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.276726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.276863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.276892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.956 [2024-07-15 11:39:47.277113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.956 [2024-07-15 11:39:47.277142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.956 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.277344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.277374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.277522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.277552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.277683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.277712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 
00:29:03.957 [2024-07-15 11:39:47.277833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.277862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.278057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.278092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.278218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.278255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.278451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.278480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.278669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.278698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.278836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.278866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.279057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.279087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.279275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.279306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.279423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.279453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.279595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.279625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 
00:29:03.957 [2024-07-15 11:39:47.279817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.279847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.279957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.279985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.280147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.280175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.280376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.280407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.280595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.280624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.280819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.280849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.281039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.281067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.281255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.281284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.281422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.281450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.281647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.281676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 
00:29:03.957 [2024-07-15 11:39:47.281937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.281966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.282097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.282126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.282261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.282291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.282409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.282438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.282623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.282653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.282772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.282801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.282998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.283028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.283240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.283270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.283417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.283446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.283629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.283659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 
00:29:03.957 [2024-07-15 11:39:47.283777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.283805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.957 qpair failed and we were unable to recover it. 00:29:03.957 [2024-07-15 11:39:47.284002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.957 [2024-07-15 11:39:47.284031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.284158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.284188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.284483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.284514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.284700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.284729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.284872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.284899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.285048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.285076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.285192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.285221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.285434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.285464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.285607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.285635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 
00:29:03.958 [2024-07-15 11:39:47.285772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.285802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.285992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.286026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.286221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.286261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.286383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.286412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.286652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.286682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.286883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.286911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.287030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.287058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.287192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.287220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.287370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.287398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.287543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.287571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 
00:29:03.958 [2024-07-15 11:39:47.287809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.287839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.287990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.288018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.288203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.288243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.288388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.288417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.288566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.288596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.288721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.288750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.288871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.288899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.289086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.289115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.289239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.289270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.289397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.289426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 
00:29:03.958 [2024-07-15 11:39:47.289610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.289638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.289785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.289815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.290011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.290042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.290158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.290186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.290317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.290346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.290540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.290569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.290771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.290800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.291006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.291036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.291170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.291205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.291453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.291521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 
00:29:03.958 [2024-07-15 11:39:47.291666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.291701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.291840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.958 [2024-07-15 11:39:47.291872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.958 qpair failed and we were unable to recover it. 00:29:03.958 [2024-07-15 11:39:47.291991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.292021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.292153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.292192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.292421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.292454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.292597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.292626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.292825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.292854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.293062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.293092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.293302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.293333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.293595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.293625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 
00:29:03.959 [2024-07-15 11:39:47.293760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.293790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.294015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.294053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.294252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.294282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.294461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.294491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.294640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.294669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.294817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.294847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.294966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:03.959 [2024-07-15 11:39:47.294990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.295018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.295154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.295183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.295323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.295354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 
00:29:03.959 [2024-07-15 11:39:47.295547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.295578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.295710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.295739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.295872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.295901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.296044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.296075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.296177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.296206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.296417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.296454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.296601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.296632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.296761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.296789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.296907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.296937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.297196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.297234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 
00:29:03.959 [2024-07-15 11:39:47.297363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.297394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.297531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.297561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.297763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.297791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.297929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.297963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.298091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.298121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.298273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.298303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.298504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.298535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.298665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.298695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.298823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.298853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.299001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.299031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 
00:29:03.959 [2024-07-15 11:39:47.299238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.299270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.299401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.299442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.299641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.299672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.959 [2024-07-15 11:39:47.299953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.959 [2024-07-15 11:39:47.299983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.959 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.300117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.300147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.300281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.300313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.300450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.300481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.300604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.300633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.300767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.300801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.300936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.300967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 
00:29:03.960 [2024-07-15 11:39:47.301084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.301115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.301261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.301291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.301527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.301594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.301740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.301775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.301902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.301931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.302070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.302098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.302221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.302260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.302453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.302483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.302765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.302795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.302919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.302947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 
00:29:03.960 [2024-07-15 11:39:47.303062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.303090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.303363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.303394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.303521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.303550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.303689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.303718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.303912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.303940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.304071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.304112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.304334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.304364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.304555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.304585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.304757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.304786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.305038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.305067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 
00:29:03.960 [2024-07-15 11:39:47.305205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.305241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.305364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.305393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.305590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.305621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.305740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.305769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.305891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.305920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.306126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.306155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.306371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.306403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.306592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.306623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.306816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.306845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.306979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.307010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 
00:29:03.960 [2024-07-15 11:39:47.307213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.307254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.307510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.307539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.307723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.307752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.307861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.307890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.960 [2024-07-15 11:39:47.308074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.960 [2024-07-15 11:39:47.308102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.960 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.308286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.308317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.308508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.308537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.308683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.308712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.308921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.308951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.309088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.309118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 
00:29:03.961 [2024-07-15 11:39:47.309257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.309287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.309413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.309441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.309632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.309674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.309897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.309928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.310174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.310204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.310440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.310472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.310684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.310715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.310910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.310939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.311149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.311179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.311315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.311346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 
00:29:03.961 [2024-07-15 11:39:47.311519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.311549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.311827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.311856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.312056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.312086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.312250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.312281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.312473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.312503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.312628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.312658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.312815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.312846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.313101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.313131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.313254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.313285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.313401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.313431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 
00:29:03.961 [2024-07-15 11:39:47.313572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.313601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.313785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.313814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.314019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.314049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.314240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.314271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.314453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.314483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.314600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.314630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.314883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.314913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.315040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.315069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.315347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.315377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.315528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.315564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 
00:29:03.961 [2024-07-15 11:39:47.315706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.961 [2024-07-15 11:39:47.315736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-07-15 11:39:47.316038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.316068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.316197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.316253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.316380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.316410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.316543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.316572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.316774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.316803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.316942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.316971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.317106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.317136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.317282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.317313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.317439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.317469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 
00:29:03.962 [2024-07-15 11:39:47.317594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.317624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.317812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.317843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.317994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.318024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.318215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.318259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.318512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.318542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.318753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.318783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.318979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.319008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.319192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.319221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.319429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.319459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.319584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.319614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 
00:29:03.962 [2024-07-15 11:39:47.319792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.319822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.320013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.320043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.320160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.320190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.320455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.320484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.320630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.320660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.320922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.320952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.321211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.321257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.321471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.321501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.321653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.321683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.321829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.321859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 
00:29:03.962 [2024-07-15 11:39:47.322050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.322081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.322248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.322280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.322420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.322450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.322641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.322671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.322898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.322927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.323056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.323086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.323214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.323253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.323474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.323503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.323780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.323810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-07-15 11:39:47.323945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.323975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 
00:29:03.962 [2024-07-15 11:39:47.324128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.962 [2024-07-15 11:39:47.324158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.324418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.324447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.324651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.324681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.324824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.324853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.325108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.325137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.325361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.325393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.325532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.325562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.325692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.325722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.325863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.325893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.326032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.326063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 
00:29:03.963 [2024-07-15 11:39:47.326250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.326279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.326405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.326435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.326628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.326659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.326853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.326883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.327038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.327069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.327350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.327381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.327525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.327555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.327810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.327839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.328035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.328065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.328195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.328235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 
00:29:03.963 [2024-07-15 11:39:47.328487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.328517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.328658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.328688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.328877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.328907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.329045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.329076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.329223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.329265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.329449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.329479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.329618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.329648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.329811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.329846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.330067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.330098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.330308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.330339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 
00:29:03.963 [2024-07-15 11:39:47.330548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.330578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.330836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.330866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.331071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.331102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.331306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.331338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.331629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.331661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.331864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.331896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.332042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.332073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.332263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.332296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.332564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.332597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.332755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.332788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 
00:29:03.963 [2024-07-15 11:39:47.332999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.333037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-07-15 11:39:47.333185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.963 [2024-07-15 11:39:47.333216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.333363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.333395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.333599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.333636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.333782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.333813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.333931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.333963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.334100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.334132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.334258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.334290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.334422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.334453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.334654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.334688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 
00:29:03.964 [2024-07-15 11:39:47.334839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.334870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.334994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.335028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.335151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.335183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.335452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.335485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.335616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.335647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.335853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.335884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.336025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.336055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.336190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.336222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.336367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.336398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.336534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.336565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 
00:29:03.964 [2024-07-15 11:39:47.336713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.336744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.336875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.336906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.337020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.337051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.337187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.337216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.337380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.337432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.337570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.337599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.337725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.337755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.337898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.337932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.338067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.338098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.338245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.338276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 
00:29:03.964 [2024-07-15 11:39:47.338419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.338449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.338609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.338640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.338831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.338862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.338998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.339029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.339173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.339204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.339352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.339382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.339597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.339628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.339762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.339794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.339994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.340025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.340214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.340256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 
00:29:03.964 [2024-07-15 11:39:47.340395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.340432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.340574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.340604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.964 [2024-07-15 11:39:47.340757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.964 [2024-07-15 11:39:47.340787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.964 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.340896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.340925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.341052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.341082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.341298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.341330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.341533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.341563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.341699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.341732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.341858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.341887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.341997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.342027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 
00:29:03.965 [2024-07-15 11:39:47.342145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.342176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.342313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.342343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.342539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.342570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.342776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.342807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.343017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.343047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.343176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.343206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.343338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.343367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.343498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.343528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.343657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.343687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.343825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.343854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 
00:29:03.965 [2024-07-15 11:39:47.344039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.344070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.344189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.344218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.344483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.344514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.344657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.344687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.344814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.344844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.344965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.344994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.345144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.345174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.345398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.345448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.345574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.345604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.345728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.345758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 
00:29:03.965 [2024-07-15 11:39:47.345962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.345993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.346189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.346219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.346370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.346401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.346517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.346547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.346664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.346694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.346893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.346922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.347080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.347111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.347243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.347273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.347408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.347439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.347579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.347608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 
00:29:03.965 [2024-07-15 11:39:47.347735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.347774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.347899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.347929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.348130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.348159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.965 qpair failed and we were unable to recover it. 00:29:03.965 [2024-07-15 11:39:47.348278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.965 [2024-07-15 11:39:47.348309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.348501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.348530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.348646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.348675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.348808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.348838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.348978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.349007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.349261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.349290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.349488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.349517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 
00:29:03.966 [2024-07-15 11:39:47.349631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.349660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.349805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.349834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.349961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.349991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.350118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.350147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.350322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.350351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.350539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.350568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.350755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.350784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.350984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.351015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.351146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.351175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.351415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.351445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 
00:29:03.966 [2024-07-15 11:39:47.351575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.351607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.351733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.351762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.351886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.351916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.352056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.352086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.352203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.352244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.352354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.352383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.352566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.352594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.352780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.352814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.352949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.352978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.353116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.353144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 
00:29:03.966 [2024-07-15 11:39:47.353257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.353288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.353508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.353537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.353845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.353874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.354124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.354153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.354278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.354308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.354441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.354470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.354602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.966 [2024-07-15 11:39:47.354631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.966 qpair failed and we were unable to recover it. 00:29:03.966 [2024-07-15 11:39:47.354814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.354843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.355026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.355056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.355245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.355275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 
00:29:03.967 [2024-07-15 11:39:47.355528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.355557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.355763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.355792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.356068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.356097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.356316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.356346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.356474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.356503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.356762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.356792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.356912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.356941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.357058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.357087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.357337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.357368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.357517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.357546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 
00:29:03.967 [2024-07-15 11:39:47.357690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.357719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.357950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.357979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.358089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.358117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.358256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.358286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.358495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.358524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.358665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.358694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.358900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.358929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.359130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.359159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.359284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.359315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.359566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.359595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 
00:29:03.967 [2024-07-15 11:39:47.359780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.359809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.359996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.360025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.360147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.360176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.360368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.360398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.360596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.360625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.360761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.360790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.360974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.361004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.361203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.361249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.361390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.361419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.361627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.361656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 
00:29:03.967 [2024-07-15 11:39:47.361838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.361868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.362053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.362082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.362212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.362250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.362394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.362423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.362701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.362730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.362972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.363001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.967 [2024-07-15 11:39:47.363199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.967 [2024-07-15 11:39:47.363235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.967 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.363452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.363481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.363705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.363735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.363917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.363946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 
00:29:03.968 [2024-07-15 11:39:47.364246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.364280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.364503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.364532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.364725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.364754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.364887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.364916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.365032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.365060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.365247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.365277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.365466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.365495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.365628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.365657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.365859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.365890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.366013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.366043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 
00:29:03.968 [2024-07-15 11:39:47.366253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.366282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.366503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.366532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.366682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.366712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.366829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.366858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.367090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.367119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.367292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.367323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.367518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.367547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.367658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.367688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.367947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.367977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.368180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.368211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 
00:29:03.968 [2024-07-15 11:39:47.368372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.368404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.368552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.368582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.368805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.368836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.369039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.369071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.369268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.369299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.369409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.369438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.369625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.369628] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.968 [2024-07-15 11:39:47.369654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:03.968 [2024-07-15 11:39:47.369654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 [2024-07-15 11:39:47.369664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.968 [2024-07-15 11:39:47.369672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.369677] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:03.968 [2024-07-15 11:39:47.369781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.369809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 
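The app_setup_trace NOTICE records interleaved above give the two ways to recover trace data from this run: invoke 'spdk_trace -s nvmf -i 0' while the target is still running, or copy /dev/shm/nvmf_trace.0 somewhere safe for offline analysis. A minimal sketch of the copy option follows; the source path comes straight from the NOTICE, while the destination name and the assumption that shm instance 0 is still present (the file goes away once the application exits) are illustrative only.

/* save_trace.c - hedged sketch: preserve /dev/shm/nvmf_trace.0 (path taken
 * from the NOTICE above) so it can be examined offline with spdk_trace after
 * the nvmf target has exited.  Build: cc -o save_trace save_trace.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    const char *src = "/dev/shm/nvmf_trace.0";  /* path named in the log */
    const char *dst = "nvmf_trace.0.saved";     /* hypothetical output name */
    char buf[65536];
    ssize_t n;

    int in = open(src, O_RDONLY);
    if (in < 0) {
        perror(src);
        return 1;
    }
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) {
        perror(dst);
        close(in);
        return 1;
    }
    /* Plain read/write copy loop; the saved file can then be fed to the
     * spdk_trace tooling mentioned in the NOTICE for offline debug. */
    while ((n = read(in, buf, sizeof(buf))) > 0) {
        if (write(out, buf, (size_t)n) != n) {
            perror("write");
            break;
        }
    }
    close(in);
    close(out);
    return 0;
}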
00:29:03.968 [2024-07-15 11:39:47.369809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:03.968 [2024-07-15 11:39:47.370018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.370047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.370178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.370211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.370160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:03.968 [2024-07-15 11:39:47.370261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:03.968 [2024-07-15 11:39:47.370263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:03.968 [2024-07-15 11:39:47.370412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.370443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.370629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.370658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.370913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.370943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.371213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.968 [2024-07-15 11:39:47.371251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.968 qpair failed and we were unable to recover it. 00:29:03.968 [2024-07-15 11:39:47.371394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.371423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.371538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.371567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.371764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.371793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 
00:29:03.969 [2024-07-15 11:39:47.371928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.371963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.372167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.372197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.372348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.372385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.372644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.372673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.372874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.372904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.373102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.373132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.373414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.373445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.373661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.373692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.373836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.373867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.374082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.374112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 
00:29:03.969 [2024-07-15 11:39:47.374313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.374344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.374480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.374510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.374628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.374657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.374915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.374945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.375105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.375134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.375339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.375371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.375505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.375534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.375786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.375816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.376004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.376034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.376245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.376277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 
00:29:03.969 [2024-07-15 11:39:47.376465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.376495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.376682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.376712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.376917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.376947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.377203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.377240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.377507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.377537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.377831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.377862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.378069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.378098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.378326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.378376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.378658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.378688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.378806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.378835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 
00:29:03.969 [2024-07-15 11:39:47.379034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.379064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.379311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.379342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.379555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.379585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.379848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.379878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.380015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.380044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.380177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.380207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.380417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.380447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.380574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.969 [2024-07-15 11:39:47.380603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.969 qpair failed and we were unable to recover it. 00:29:03.969 [2024-07-15 11:39:47.380722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.380752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.380872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.380901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 
00:29:03.970 [2024-07-15 11:39:47.381176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.381204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.381355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.381385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.381577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.381607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.381885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.381915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.382038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.382068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.382189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.382219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.382431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.382462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.382716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.382746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.382971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.383000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.383203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.383243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 
00:29:03.970 [2024-07-15 11:39:47.383467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.383497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.383638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.383668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.383858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.383887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.384039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.384070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.384202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.384350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.384546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.384576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.384779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.384808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.384930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.384960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.385211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.385254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.385472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.385502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 
00:29:03.970 [2024-07-15 11:39:47.385730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.385760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.385962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.385992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.386184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.386213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.386409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.386440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.386670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.386699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.386893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.386922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.387146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.387176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.387389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.387420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.387637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.387668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.387868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.387898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 
00:29:03.970 [2024-07-15 11:39:47.388042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.970 [2024-07-15 11:39:47.388072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.970 qpair failed and we were unable to recover it. 00:29:03.970 [2024-07-15 11:39:47.388236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.388267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.388540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.388571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.388775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.388804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.388936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.388966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.389101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.389130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.389325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.389357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.389551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.389580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.389713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.389742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.389918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.389948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 
00:29:03.971 [2024-07-15 11:39:47.390138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.390168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.390301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.390338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.390611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.390642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.390827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.390857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.390975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.391005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.391197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.391234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.391364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.391394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.391643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.391674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.391806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.391837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.391956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.391986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 
00:29:03.971 [2024-07-15 11:39:47.392260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.392292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.392525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.392556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.392822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.392853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.393010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.393041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.393240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.393271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.393465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.393497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.393684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.393716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.393968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.393998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.394140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.394173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.394399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.394432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 
00:29:03.971 [2024-07-15 11:39:47.394628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.394658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.394934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.394966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.395236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.395269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.395380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.395412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.395616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.395647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.395797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.395828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.395979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.396008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.396195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.396243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.396450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.396496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.396703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.396732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 
00:29:03.971 [2024-07-15 11:39:47.396927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.396958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.397142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.397173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.971 [2024-07-15 11:39:47.397416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.971 [2024-07-15 11:39:47.397448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.971 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.397637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.397669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.397857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.397887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.398121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.398152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.398345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.398378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.398605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.398634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.398886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.398917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.399192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.399223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 
00:29:03.972 [2024-07-15 11:39:47.399430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.399462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.399652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.399682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.399905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.399935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.400144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.400175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.400318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.400348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.400531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.400563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.400811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.400842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.401096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.401126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.401274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.401305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.401572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.401602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 
00:29:03.972 [2024-07-15 11:39:47.401733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.401763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.401891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.401920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.402104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.402134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.402330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.402361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.402579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.402609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.402798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.402828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.403031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.403061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.403328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.403359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.403503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.403532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.403730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.403761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 
00:29:03.972 [2024-07-15 11:39:47.403898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.403929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.404134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.404164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.404309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.404339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.404616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.404645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.404838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.404867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.405050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.405079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.405303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.405334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.405570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.405600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.405870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.405899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.406142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.406204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 
00:29:03.972 [2024-07-15 11:39:47.406438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.406481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.406741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.406770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.406955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.406983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.407263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.972 [2024-07-15 11:39:47.407293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.972 qpair failed and we were unable to recover it. 00:29:03.972 [2024-07-15 11:39:47.407547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.407576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.407783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.407813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.408083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.408113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.408304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.408334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.408539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.408568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.408842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.408872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 
00:29:03.973 [2024-07-15 11:39:47.408989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.409018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.409267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.409297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.409431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.409468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.409742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.409772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.409994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.410023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.410158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.410187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.410448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.410478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.410756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.410787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.410985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.411015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.411286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.411316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 
00:29:03.973 [2024-07-15 11:39:47.411533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.411562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.411681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.411710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.411831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.411860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.412060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.412089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.412205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.412245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.412390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.412420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.412677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.412706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.412850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.412879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.413082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.413112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.413296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.413327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 
00:29:03.973 [2024-07-15 11:39:47.413530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.413559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.413711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.413740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.413929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.413958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.414155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.414184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.414321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.414351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.414550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.414579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.414787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.414817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.415010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.415038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.415244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.415273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.415508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.415548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 
00:29:03.973 [2024-07-15 11:39:47.415739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.415769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.416045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.416074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.416286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.416316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.416508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.416538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.973 [2024-07-15 11:39:47.416790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.973 [2024-07-15 11:39:47.416819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.973 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.416954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.416984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.417186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.417216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.417377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.417406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.417625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.417655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.417874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.417904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 
00:29:03.974 [2024-07-15 11:39:47.418131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.418160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.418291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.418322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.418525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.418554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.418758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.418788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.419040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.419070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.419322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.419352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.419658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.419688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.419838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.419867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.420063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.420093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.420293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.420323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 
00:29:03.974 [2024-07-15 11:39:47.420526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.420557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.420802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.420833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.421085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.421115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.421321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.421353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.421629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.421659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.421854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.421883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.422164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.422200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.422418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.422448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.422638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.422668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.422855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.422884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 
00:29:03.974 [2024-07-15 11:39:47.423157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.423188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.423385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.423416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.423604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.423634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.423770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.423801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.424008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.424039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.424292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.424324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.424459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.424488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.424692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.424722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.424857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.424887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 00:29:03.974 [2024-07-15 11:39:47.425088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.974 [2024-07-15 11:39:47.425118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.974 qpair failed and we were unable to recover it. 
00:29:03.974 [2024-07-15 11:39:47.425404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.425435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.425575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.425606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.425734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.425764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.425991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.426020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.426153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.426182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.426408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.426439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.426688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.426716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.426990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.427019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.427223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.427262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.427540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.427570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 
00:29:03.975 [2024-07-15 11:39:47.427706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.427735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.427985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.428014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.428154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.428184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.428391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.428426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.428580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.428610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.428824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.428854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.429041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.429070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.429291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.429322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.429513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.429543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.429719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.429748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 
00:29:03.975 [2024-07-15 11:39:47.429872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.429901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.430100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.430129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.430321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.430351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.430485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.430515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.430697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.430726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.430933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.430962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.431147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.431177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.431376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.431407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.431661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.431690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.431821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.431850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 
00:29:03.975 [2024-07-15 11:39:47.432105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.432134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.432322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.432353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.432539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.432567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.432819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.432849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.433037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.433067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.433219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.433274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.433462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.433492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.433622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.433651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.433901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.433931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 00:29:03.975 [2024-07-15 11:39:47.434150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.434179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.975 qpair failed and we were unable to recover it. 
00:29:03.975 [2024-07-15 11:39:47.434467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.975 [2024-07-15 11:39:47.434497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.434758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.434788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.434921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.434950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.435234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.435265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.435480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.435509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.435716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.435745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.436023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.436052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.436246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.436276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.436524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.436554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.436762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.436792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 
00:29:03.976 [2024-07-15 11:39:47.436998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.437027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.437218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.437275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.437411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.437440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.437640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.437670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.437946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.437997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.438209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.438250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.438381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.438411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.438549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.438577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.438758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.438787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.438905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.438934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 
00:29:03.976 [2024-07-15 11:39:47.439125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.439153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.439454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.439484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.439677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.439705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.439955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.439984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.440242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.440272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.440471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.440499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.440634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.440662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.440895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.440932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.441066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.441096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.441373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.441403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 
00:29:03.976 [2024-07-15 11:39:47.441596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.441625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.441740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.441769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.442048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.442076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.442351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.442381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.442584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.442612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.442898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.442927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.443075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.443103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.443330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.443359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.443485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.443514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.443664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.443692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 
00:29:03.976 [2024-07-15 11:39:47.443896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.443925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.976 qpair failed and we were unable to recover it. 00:29:03.976 [2024-07-15 11:39:47.444068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.976 [2024-07-15 11:39:47.444098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.444288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.444318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.444518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.444547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.444683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.444712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.444836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.444865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.445060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.445088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.445303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.445333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.445581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.445610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.445860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.445889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 
00:29:03.977 [2024-07-15 11:39:47.446071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.446099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.446352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.446382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.446661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.446690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.446970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.446999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.447221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.447265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.447521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.447550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.447782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.447812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.447947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.447976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.448219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.448257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.448384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.448414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 
00:29:03.977 [2024-07-15 11:39:47.448665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.448694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.448908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.448937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.449223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.449266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.449438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.449467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.449675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.449705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.449899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.449928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.450061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.450090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.450245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.450281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.450483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.450512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.450699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.450728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 
00:29:03.977 [2024-07-15 11:39:47.451007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.451036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.451173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.451202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.451412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.451442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.451740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.451770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.451980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.452009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.452199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.452237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.452381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.452411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.452620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.452649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.452865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.452895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.453078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.453108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 
00:29:03.977 [2024-07-15 11:39:47.453246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.453277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.453407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.453437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.977 qpair failed and we were unable to recover it. 00:29:03.977 [2024-07-15 11:39:47.453696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.977 [2024-07-15 11:39:47.453724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.453921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.453950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.454148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.454177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.454434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.454464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.454738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.454767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.454967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.454996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.455183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.455211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.455440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.455470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 
00:29:03.978 [2024-07-15 11:39:47.455652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.455681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.455870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.455899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.456122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.456151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.456438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.456468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.456678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.456720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.457017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.457047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.457300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.457332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.457532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.457561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.457832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.457861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.458111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.458140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 
00:29:03.978 [2024-07-15 11:39:47.458337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.458367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.458493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.458522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.458739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.458768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.458972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.459001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.459152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.459181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.459398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.459428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.459678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.459707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.459833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.459870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.460040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.460070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.460322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.460352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 
00:29:03.978 [2024-07-15 11:39:47.460647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.460677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.460872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.460901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.461116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.461145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.461294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.461324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.461537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.461566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.461686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.461715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.461935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.978 [2024-07-15 11:39:47.461964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.978 qpair failed and we were unable to recover it. 00:29:03.978 [2024-07-15 11:39:47.462090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.979 [2024-07-15 11:39:47.462119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.979 qpair failed and we were unable to recover it. 00:29:03.979 [2024-07-15 11:39:47.462314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.979 [2024-07-15 11:39:47.462345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.979 qpair failed and we were unable to recover it. 00:29:03.979 [2024-07-15 11:39:47.462552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.979 [2024-07-15 11:39:47.462581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:03.979 qpair failed and we were unable to recover it. 
00:29:03.979 [2024-07-15 11:39:47.462803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.979 [2024-07-15 11:39:47.462833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420
00:29:03.979 qpair failed and we were unable to recover it.
00:29:03.984 (the same three-line failure repeats continuously from 11:39:47.462803 through 11:39:47.510684, cycling through tqpair handles 0x7f6260000b90, 0x2408ed0, 0x7f6250000b90, and 0x7f6258000b90, all targeting addr=10.0.0.2, port=4420; every connect() attempt fails with errno = 111 and each entry ends with "qpair failed and we were unable to recover it.")
00:29:03.984 [2024-07-15 11:39:47.510995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.511024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.511245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.511274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.511551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.511581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.511776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.511805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.512002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.512031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.512172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.512201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.512467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.512497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.512699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.512727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.512925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.512954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.513082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.513110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 
00:29:03.984 [2024-07-15 11:39:47.513241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.513272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.513467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.513496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.513636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.513665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.513935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.513963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.514205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.514244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.514369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.514398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.514627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.984 [2024-07-15 11:39:47.514656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.984 qpair failed and we were unable to recover it. 00:29:03.984 [2024-07-15 11:39:47.514788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.514817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:03.985 [2024-07-15 11:39:47.515038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.515067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:03.985 [2024-07-15 11:39:47.515311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.515346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 
00:29:03.985 [2024-07-15 11:39:47.515622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.515651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:03.985 [2024-07-15 11:39:47.515791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.515819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:03.985 [2024-07-15 11:39:47.516004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.516032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:03.985 [2024-07-15 11:39:47.516305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.516335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:03.985 [2024-07-15 11:39:47.516591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.516619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:03.985 [2024-07-15 11:39:47.516819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.516848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:03.985 [2024-07-15 11:39:47.517103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.517132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:03.985 [2024-07-15 11:39:47.517354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.517383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:03.985 [2024-07-15 11:39:47.517611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.517640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:03.985 [2024-07-15 11:39:47.517827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.517856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 
00:29:03.985 [2024-07-15 11:39:47.518029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.518058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:03.985 [2024-07-15 11:39:47.518263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-07-15 11:39:47.518291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:03.985 qpair failed and we were unable to recover it. 00:29:04.264 [2024-07-15 11:39:47.518494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.264 [2024-07-15 11:39:47.518523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.264 qpair failed and we were unable to recover it. 00:29:04.264 [2024-07-15 11:39:47.518736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.264 [2024-07-15 11:39:47.518765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.264 qpair failed and we were unable to recover it. 00:29:04.264 [2024-07-15 11:39:47.519039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.264 [2024-07-15 11:39:47.519067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.264 qpair failed and we were unable to recover it. 00:29:04.264 [2024-07-15 11:39:47.519258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.264 [2024-07-15 11:39:47.519287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.264 qpair failed and we were unable to recover it. 00:29:04.264 [2024-07-15 11:39:47.519486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.264 [2024-07-15 11:39:47.519513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.264 qpair failed and we were unable to recover it. 00:29:04.264 [2024-07-15 11:39:47.519650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.264 [2024-07-15 11:39:47.519680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.264 qpair failed and we were unable to recover it. 00:29:04.264 [2024-07-15 11:39:47.519827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.264 [2024-07-15 11:39:47.519854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.264 qpair failed and we were unable to recover it. 00:29:04.264 [2024-07-15 11:39:47.519990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.264 [2024-07-15 11:39:47.520017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.264 qpair failed and we were unable to recover it. 
00:29:04.264 [2024-07-15 11:39:47.520218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.264 [2024-07-15 11:39:47.520270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.264 qpair failed and we were unable to recover it. 00:29:04.264 [2024-07-15 11:39:47.520407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.264 [2024-07-15 11:39:47.520436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.264 qpair failed and we were unable to recover it. 00:29:04.264 [2024-07-15 11:39:47.520667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.264 [2024-07-15 11:39:47.520696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.264 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.520833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.520861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.521055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.521083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.521283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.521312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.521519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.521548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.521676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.521706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.521941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.521973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.522177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.522206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 
00:29:04.265 [2024-07-15 11:39:47.522425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.522456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.522640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.522669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.522850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.522878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.523152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.523180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.523465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.523495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.523636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.523666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.523880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.523908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.524113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.524141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.524417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.524446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.524633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.524666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 
00:29:04.265 [2024-07-15 11:39:47.524882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.524910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.525108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.525136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.525396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.525425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.525683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.525711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.525915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.525943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.526195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.526246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.526550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.526580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.526723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.526752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.526906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.526934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.527056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.527084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 
00:29:04.265 [2024-07-15 11:39:47.527220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.527258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.527452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.527480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.527728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.527756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.527962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.527990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.528247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.528277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.528432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.528459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.528563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.528591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.528844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.528874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.529122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.529150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.529288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.529318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 
00:29:04.265 [2024-07-15 11:39:47.529612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.529641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.529826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.529854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.265 [2024-07-15 11:39:47.529984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.265 [2024-07-15 11:39:47.530012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.265 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.530268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.530298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.530568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.530597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.530798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.530826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.531039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.531067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.531195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.531231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.531432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.531461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.531591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.531619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 
00:29:04.266 [2024-07-15 11:39:47.531867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.531896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.532080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.532108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.532308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.532338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.532478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.532506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.532635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.532663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.532970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.532999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.533186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.533214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.533369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.533398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.533522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.533550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.533702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.533740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 
00:29:04.266 [2024-07-15 11:39:47.533933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.533962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.534216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.534267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.534400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.534428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.534542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.534570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.534693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.534720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.534864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.534893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.535080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.535108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.535310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.535339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.535520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.535547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.535726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.535756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 
00:29:04.266 [2024-07-15 11:39:47.535955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.535983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.536175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.536203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.536415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.536444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.536633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.536661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.536881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.536909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.537096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.537124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.537310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.537340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.537566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.537595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.537736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.537763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.538013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.538041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 
00:29:04.266 [2024-07-15 11:39:47.538267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.538296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.538426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.538454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.538647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.266 [2024-07-15 11:39:47.538677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.266 qpair failed and we were unable to recover it. 00:29:04.266 [2024-07-15 11:39:47.538901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.267 [2024-07-15 11:39:47.538929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.267 qpair failed and we were unable to recover it. 00:29:04.267 [2024-07-15 11:39:47.539076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.267 [2024-07-15 11:39:47.539105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.267 qpair failed and we were unable to recover it. 00:29:04.267 [2024-07-15 11:39:47.539295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.267 [2024-07-15 11:39:47.539323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.267 qpair failed and we were unable to recover it. 00:29:04.267 [2024-07-15 11:39:47.539518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.267 [2024-07-15 11:39:47.539545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.267 qpair failed and we were unable to recover it. 00:29:04.267 [2024-07-15 11:39:47.539749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.267 [2024-07-15 11:39:47.539777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.267 qpair failed and we were unable to recover it. 00:29:04.267 [2024-07-15 11:39:47.539967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.267 [2024-07-15 11:39:47.539995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.267 qpair failed and we were unable to recover it. 00:29:04.267 [2024-07-15 11:39:47.540249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.267 [2024-07-15 11:39:47.540278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.267 qpair failed and we were unable to recover it. 
00:29:04.267 [2024-07-15 11:39:47.540462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:04.267 [2024-07-15 11:39:47.540491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 
00:29:04.267 qpair failed and we were unable to recover it. 
00:29:04.272 [repeated output collapsed] The same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) recurs continuously from 11:39:47.540462 through 11:39:47.588821, first for tqpair=0x7f6258000b90, briefly for tqpair=0x2408ed0, and then again for tqpair=0x7f6258000b90.
00:29:04.272 [2024-07-15 11:39:47.589014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.589043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.589171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.589198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.589352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.589395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.589533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.589563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.589702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.589732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.589948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.589977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.590119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.590148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.590405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.590435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.590555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.590585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.590836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.590865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 
00:29:04.272 [2024-07-15 11:39:47.591048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.591077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.591355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.591388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.591522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.591552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.591682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.591717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.591855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.272 [2024-07-15 11:39:47.591884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.272 qpair failed and we were unable to recover it. 00:29:04.272 [2024-07-15 11:39:47.592023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.592053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.592307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.592356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.592531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.592561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.592753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.592782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.593003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.593032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 
00:29:04.273 [2024-07-15 11:39:47.593218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.593255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.593511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.593541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.593750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.593780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.593974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.594003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.594184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.594213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.594414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.594444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.594664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.594694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.594904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.594934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.595190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.595219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.595450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.595481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 
00:29:04.273 [2024-07-15 11:39:47.595684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.595714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.595962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.595991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.596245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.596276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.596419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.596449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.596578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.596608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.596793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.596822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.597016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.597045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.597261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.597291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.597493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.597523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.597720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.597749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 
00:29:04.273 [2024-07-15 11:39:47.597952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.597981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.598192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.598222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.598363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.598393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.598658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.598688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.598811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.598841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.598977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.599006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.599137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.599166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.273 [2024-07-15 11:39:47.599448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.273 [2024-07-15 11:39:47.599479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.273 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.599733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.599761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.599962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.599991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 
00:29:04.274 [2024-07-15 11:39:47.600245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.600276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.600493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.600523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.600667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.600697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.600890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.600924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.601128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.601158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.601354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.601384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.601636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.601665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.601838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.601867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.602000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.602029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.602256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.602287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 
00:29:04.274 [2024-07-15 11:39:47.602493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.602522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.602716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.602745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.602944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.602973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.603106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.603134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.603266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.603295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.603518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.603547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.603750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.603779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.603924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.603953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.604205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.604241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.604380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.604409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 
00:29:04.274 [2024-07-15 11:39:47.604542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.604571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.604770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.604799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.604999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.605028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.605320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.605351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.605556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.605585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.605801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.605831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.606028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.606057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.606280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.606311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.606518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.606547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.606754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.606783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 
00:29:04.274 [2024-07-15 11:39:47.606983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.607012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.607207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.607244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.607430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.607459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.607656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.607684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.607937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.607966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.608168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.608197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.608408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.274 [2024-07-15 11:39:47.608440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.274 qpair failed and we were unable to recover it. 00:29:04.274 [2024-07-15 11:39:47.608628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.608656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.608852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.608881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.609062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.609091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 
00:29:04.275 [2024-07-15 11:39:47.609214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.609249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.609458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.609487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.609682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.609711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.609915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.609949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.610147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.610176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.610369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.610400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.610607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.610635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.610773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.610801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.610938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.610966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.611250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.611281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 
00:29:04.275 [2024-07-15 11:39:47.611481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.611510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.611696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.611725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.611939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.611968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.612168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.612196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.612432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.612464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.612616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.612646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.612789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.612818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.613018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.613047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.613247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.613277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.613415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.613444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 
00:29:04.275 [2024-07-15 11:39:47.613632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.613660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.613847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.613875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.614096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.614124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.614328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.614358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.614489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.614517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.614774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.614803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.614920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.614949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.615078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.615108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.615358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.615388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.615536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.615564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 
00:29:04.275 [2024-07-15 11:39:47.615788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.615817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.616013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.616042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.616255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.616285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.616542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.616571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.616775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.616803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.616948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.616977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.275 [2024-07-15 11:39:47.617200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.275 [2024-07-15 11:39:47.617239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.275 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.617426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.617455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.617645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.617673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.617872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.617900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 
00:29:04.276 [2024-07-15 11:39:47.618095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.618124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.618308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.618338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.618588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.618617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.618816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.618850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.619052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.619081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.619305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.619336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.619532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.619560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.619696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.619725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.619911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.619940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.620136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.620164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 
00:29:04.276 [2024-07-15 11:39:47.620328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.620359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.620544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.620573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.620718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.620746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.620879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.620907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.621109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.621138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.621280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.621309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.621560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.621589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.621791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.621820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.621963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.621992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.622173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.622203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 
00:29:04.276 [2024-07-15 11:39:47.622462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.622491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.622681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.622710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.622860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.622890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.623143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.623172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.623477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.623508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.623739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.623768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.623913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.623942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.624193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.624222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.624437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.624467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.624584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.624612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 
00:29:04.276 [2024-07-15 11:39:47.624818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.624847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.624979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.625008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.625213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.625251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.625526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.625556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.625824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.625853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.626079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.626107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.626310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.276 [2024-07-15 11:39:47.626339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.276 qpair failed and we were unable to recover it. 00:29:04.276 [2024-07-15 11:39:47.626589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.626618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.626814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.626843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.627094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.627123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 
00:29:04.277 [2024-07-15 11:39:47.627322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.627351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.627564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.627593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.627848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.627877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.628019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.628053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.628193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.628222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.628436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.628466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.628650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.628679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.628883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.628912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.629129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.629158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.629408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.629437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 
00:29:04.277 [2024-07-15 11:39:47.629572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.629601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.629795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.629824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.629967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.629997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.630208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.630248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.630440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.630468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.630657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.630686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.630937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.630965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.631244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.631274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.631495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.631525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.631668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.631698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 
00:29:04.277 [2024-07-15 11:39:47.631968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.631997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.632203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.632251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.632503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.632533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.632719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.632748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.632940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.632969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.633103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.633132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.633346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.633376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.633512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.633542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.277 [2024-07-15 11:39:47.633791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.277 [2024-07-15 11:39:47.633820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.277 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.634002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.634032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 
00:29:04.278 [2024-07-15 11:39:47.634218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.634255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.634452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.634481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.634759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.634788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.635049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.635078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.635238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.635268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.635550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.635579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.635788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.635816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.636068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.636098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.636347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.636376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.636673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.636702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 
00:29:04.278 [2024-07-15 11:39:47.636902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.636931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.637054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.637083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.637323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.637354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.637638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.637676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.637932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.637961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.638174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.638203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.638430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.638460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.638671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.638701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.638965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.638994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.639114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.639143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 
00:29:04.278 [2024-07-15 11:39:47.639373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.639403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.639550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.639580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.639789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.639818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.639951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.639980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.640176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.640204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.640482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.640513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.640738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.640767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.640960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.640989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.641195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.641234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.641437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.641466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 
00:29:04.278 [2024-07-15 11:39:47.641739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.641768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.278 qpair failed and we were unable to recover it. 00:29:04.278 [2024-07-15 11:39:47.641995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-07-15 11:39:47.642025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.642272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.642302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.642521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.642550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.642678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.642707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.642958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.642987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.643250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.643280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.643481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.643510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.643645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.643674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.643857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.643887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 
00:29:04.279 [2024-07-15 11:39:47.644090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.644119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.644310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.644341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.644525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.644555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.644780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.644809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.644994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.645023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.645244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.645274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.645477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.645507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.645787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.645816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.646008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.646037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.646168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.646197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 
00:29:04.279 [2024-07-15 11:39:47.646462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.646502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.646641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.646670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.646872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.646902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.647041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.647076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.647208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.647249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.647524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.647554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.647709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.647738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.647929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.647959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.648073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.648103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.648250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.648282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 
00:29:04.279 [2024-07-15 11:39:47.648516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.648546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.648820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.648849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.649099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.649128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.649323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.649354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.649609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.649639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.649890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.649919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.650041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.650070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.650270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.650300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.650508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.650537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.650757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.650787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 
00:29:04.279 [2024-07-15 11:39:47.650920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.650950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.279 qpair failed and we were unable to recover it. 00:29:04.279 [2024-07-15 11:39:47.651104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-07-15 11:39:47.651133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.651320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.651350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.651622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.651651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.651848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.651878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.652063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.652092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.652238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.652269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.652400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.652430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.652568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.652598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.652722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.652752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 
00:29:04.280 [2024-07-15 11:39:47.652940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.652970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.653152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.653181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.653470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.653501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.653684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.653714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.653927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.653956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.654094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.654124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.654393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.654424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.654708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.654738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.655014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.655044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.655242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.655272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 
00:29:04.280 [2024-07-15 11:39:47.655525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.655555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.655758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.655788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.655981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.656010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.656242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.656278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.656425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.656456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.656601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.656630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.656814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.656844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.656954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.656984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.657101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.657131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.657270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.657301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 
00:29:04.280 [2024-07-15 11:39:47.657555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.657585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.657715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.657744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.657969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.657999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.658262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.658294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.658449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.658479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.658613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.658642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.658789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.658818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.659101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.659130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.659378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.659408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.659603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.659633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 
00:29:04.280 [2024-07-15 11:39:47.659845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.659873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.660130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-07-15 11:39:47.660159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.280 qpair failed and we were unable to recover it. 00:29:04.280 [2024-07-15 11:39:47.660310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.660340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.660527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.660557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.660679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.660708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.660955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.660984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.661124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.661154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.661370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.661399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.661628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.661657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.661842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.661871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 
00:29:04.281 [2024-07-15 11:39:47.662071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.662104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.662312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.662342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.662477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.662507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.662730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.662758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.662943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.662971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.663223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.663261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.663483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.663512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.663647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.663675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.663819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.663847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.664044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.664073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 
00:29:04.281 [2024-07-15 11:39:47.664325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.664355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.664507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.664536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.664665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.664692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.664890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.664924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.665121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.665150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.665338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.665368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.665561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.665590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.665871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.665900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.666031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.666059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.666189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.666217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 
00:29:04.281 [2024-07-15 11:39:47.666432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.666462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.666662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.666690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.666831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.666860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.667059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.667086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.667268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.667297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.667575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.667604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.667811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.667841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.668048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.668077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.668325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.668354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.668548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.668576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 
00:29:04.281 [2024-07-15 11:39:47.668712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.668741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.668923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.668951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.669203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.281 [2024-07-15 11:39:47.669239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.281 qpair failed and we were unable to recover it. 00:29:04.281 [2024-07-15 11:39:47.669467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.669496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.669770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.669798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.669945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.669972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.670163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.670191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.670383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.670412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.670637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.670666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.670915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.670944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 
00:29:04.282 [2024-07-15 11:39:47.671174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.671218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.671446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.671477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.671699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.671729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.672000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.672030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.672308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.672338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.672537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.672567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.672825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.672854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.672986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.673015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.673220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.673260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.673390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.673419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 
00:29:04.282 [2024-07-15 11:39:47.673694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.673724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.673920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.673949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.674241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.674272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.674525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.674555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.674835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.674864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.675114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.675142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.675348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.675378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.675512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.675541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.675772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.675801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.675943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.675971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 
00:29:04.282 [2024-07-15 11:39:47.676104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.676133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.676331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.676361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.676563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.676592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.676789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.676818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.677067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.677095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.677284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.677314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.282 qpair failed and we were unable to recover it. 00:29:04.282 [2024-07-15 11:39:47.677588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.282 [2024-07-15 11:39:47.677617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.677748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.677782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.677986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.678015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.678315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.678344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 
00:29:04.283 [2024-07-15 11:39:47.678471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.678500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.678686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.678715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.678900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.678929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.679117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.679146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.679328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.679358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.679610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.679639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.679787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.679816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.679999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.680028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.680297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.680326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.680527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.680557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 
00:29:04.283 [2024-07-15 11:39:47.680810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.680839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.680983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.681013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.681284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.681314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.681517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.681546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.681787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.681816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.682066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.682095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.682307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.682337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.682487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.682516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.682720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.682749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.682971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.683001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 
00:29:04.283 [2024-07-15 11:39:47.683248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.683278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.683467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.683497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.683746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.683775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.683978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.684007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.684202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.684245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.684448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.684477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.684612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.684642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.684919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.684948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.685096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.685125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.685374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.685404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 
00:29:04.283 [2024-07-15 11:39:47.685597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.685626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.685735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.685764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.686039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.686069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.686299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.686328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.686530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.686560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.686692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.686722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.686997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.687027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.283 qpair failed and we were unable to recover it. 00:29:04.283 [2024-07-15 11:39:47.687300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.283 [2024-07-15 11:39:47.687330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.687447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.687477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.687623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.687652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 
00:29:04.284 [2024-07-15 11:39:47.687792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.687821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.688019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.688049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.688252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.688282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.688469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.688498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.688623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.688652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.688765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.688795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.689044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.689074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.689347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.689377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.689524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.689553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.689760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.689789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 
00:29:04.284 [2024-07-15 11:39:47.690013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.690042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.690260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.690295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.690428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.690457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.690676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.690705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.690903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.690932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.691130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.691159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.691299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.691331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.691474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.691503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.691644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.691673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.691784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.691813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 
00:29:04.284 [2024-07-15 11:39:47.691941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.691970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.692115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.692145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.692421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.692450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.692733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.692762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.692896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.692925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.693117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.693146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.693397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.693427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.693639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.693668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.693875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.693904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.694098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.694127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 
00:29:04.284 [2024-07-15 11:39:47.694272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.694302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.694573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.694603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.694797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.694825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.695117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.695146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.695276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.695306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.695505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.695535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.695753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.695782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.695929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.284 [2024-07-15 11:39:47.695958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.284 qpair failed and we were unable to recover it. 00:29:04.284 [2024-07-15 11:39:47.696151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.696180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.696421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.696451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 
00:29:04.285 [2024-07-15 11:39:47.696585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.696614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.696890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.696919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.697186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.697215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.697442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.697472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.697610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.697640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.697898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.697927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.698177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.698206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.698503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.698534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.698673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.698703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.698926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.698955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 
00:29:04.285 [2024-07-15 11:39:47.699098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.699127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.699313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.699344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.699626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.699660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.699847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.699879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.700020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.700050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.700259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.700290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.700540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.700568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.700769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.700798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.701000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.701030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.701295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.701324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 
00:29:04.285 [2024-07-15 11:39:47.701459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.701488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.701637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.701667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.701863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.701892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.702163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.702193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.702356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.702385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.702637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.702672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.702889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.702919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.703072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.703101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.703259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.703289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.703471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.703500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 
00:29:04.285 [2024-07-15 11:39:47.703705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.703734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.703923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.703952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.704093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.704123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.704264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.704295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.704550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.704579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.704774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.704803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.704988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.705017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.705296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.705325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.285 [2024-07-15 11:39:47.705595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.285 [2024-07-15 11:39:47.705625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.285 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.705853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.705882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 
00:29:04.286 [2024-07-15 11:39:47.706135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.706165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.706294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.706324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.706525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.706555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.706855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.706885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.707070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.707099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.707294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.707324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.707512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.707541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.707732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.707762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.707984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.708014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.708124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.708153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 
00:29:04.286 [2024-07-15 11:39:47.708359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.708390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.708663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.708693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.708891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.708925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.709134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.709163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.709362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.709392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.709615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.709643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.709841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.709870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.710150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.710178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.710391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.710422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.710748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.710777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 
00:29:04.286 [2024-07-15 11:39:47.710980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.711009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.711232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.711262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.711531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.711560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.711754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.711783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.711984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.712013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.712198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.712235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.712363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.712391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.712583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.286 [2024-07-15 11:39:47.712611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.286 qpair failed and we were unable to recover it. 00:29:04.286 [2024-07-15 11:39:47.712798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.287 [2024-07-15 11:39:47.712825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.287 qpair failed and we were unable to recover it. 00:29:04.287 [2024-07-15 11:39:47.713036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.287 [2024-07-15 11:39:47.713065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.287 qpair failed and we were unable to recover it. 
00:29:04.287 [2024-07-15 11:39:47.713246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.287 [2024-07-15 11:39:47.713276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.287 qpair failed and we were unable to recover it. 00:29:04.287 [2024-07-15 11:39:47.713483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.287 [2024-07-15 11:39:47.713512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.287 qpair failed and we were unable to recover it. 00:29:04.287 [2024-07-15 11:39:47.713707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.287 [2024-07-15 11:39:47.713735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.287 qpair failed and we were unable to recover it. 00:29:04.287 [2024-07-15 11:39:47.713927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.287 [2024-07-15 11:39:47.713955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.287 qpair failed and we were unable to recover it. 00:29:04.287 [2024-07-15 11:39:47.714090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.287 [2024-07-15 11:39:47.714119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.287 qpair failed and we were unable to recover it. 00:29:04.287 [2024-07-15 11:39:47.714336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.287 [2024-07-15 11:39:47.714365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.287 qpair failed and we were unable to recover it. 00:29:04.287 [2024-07-15 11:39:47.714612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.287 [2024-07-15 11:39:47.714641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.287 qpair failed and we were unable to recover it. 00:29:04.287 [2024-07-15 11:39:47.714831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.287 [2024-07-15 11:39:47.714860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.287 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.714970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.714998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.715135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.715163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 
00:29:04.288 [2024-07-15 11:39:47.715377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.715407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.715683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.715712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.715847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.715875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.716126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.716154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.716299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.716328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.716586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.716614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.716759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.716786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.716929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.716957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.717142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.717171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.717386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.717416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 
00:29:04.288 [2024-07-15 11:39:47.717545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.717573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.717694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.717722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.717852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.717885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.718057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.718085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.718289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.718318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.718528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.718556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.718811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.718842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.718986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.719014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.719264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.719294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.719523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.719552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 
00:29:04.288 [2024-07-15 11:39:47.719828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.719856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.720038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.720066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.720268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.720297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.720547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.720576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.720828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.720856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.721036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.721065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.721202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.721251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.721523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.721552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.721795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.721824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.721966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.721995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 
00:29:04.288 [2024-07-15 11:39:47.722261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.722292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.722478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.722507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.722782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.722811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.723011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.723040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.723301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.723331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.723531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.723561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.723810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.723839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.288 [2024-07-15 11:39:47.724089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.288 [2024-07-15 11:39:47.724118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.288 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.724269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.724298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.724503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.724532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 
00:29:04.289 [2024-07-15 11:39:47.724808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.724837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.725039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.725068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.725269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.725298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.725589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.725617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.725812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.725840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.726089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.726118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.726316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.726346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.726564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.726593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.726882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.726910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.727109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.727137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 
00:29:04.289 [2024-07-15 11:39:47.727394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.727424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.727617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.727646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.727853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.727888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.728073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.728102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.728236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.728265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.728514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.728543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.728686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.728714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.728897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.728926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.729119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.729147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.729421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.729450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 
00:29:04.289 [2024-07-15 11:39:47.729650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.729679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.729807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.729834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.730104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.730132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.730365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.730394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.730606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.730634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.730817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.730845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.731122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.731150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.731336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.731365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.731500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.731529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.731723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.731752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 
00:29:04.289 [2024-07-15 11:39:47.731890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.731918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.732064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.732092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.732312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.732342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.732611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.732639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.732808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.732837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.733040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.733069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.733268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.733297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.733486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.289 [2024-07-15 11:39:47.733515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.289 qpair failed and we were unable to recover it. 00:29:04.289 [2024-07-15 11:39:47.733712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.733740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.733996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.734025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 
00:29:04.290 [2024-07-15 11:39:47.734303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.734333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.734529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.734558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.734807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.734836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.735023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.735051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.735325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.735355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.735540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.735568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.735783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.735811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.736009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.736037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.736300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.736329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.736481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.736509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 
00:29:04.290 [2024-07-15 11:39:47.736637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.736666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.736923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.736951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.737161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.737195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.737252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2417000 (9): Bad file descriptor 00:29:04.290 [2024-07-15 11:39:47.737481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.737516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.737705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.737735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.737932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.737961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.738143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.738172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.738323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.738354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.738567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.738596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 
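Note (editorial, not part of the captured output): the record above contains the one error in this section that differs from the repeated connect() failures, "Failed to flush tqpair=0x2417000 (9): Bad file descriptor" from nvme_tcp_qpair_process_completions. The number in parentheses is an errno value; on the Linux hosts this CI runs on, errno 9 is EBADF ("Bad file descriptor") and errno 111, which appears throughout this section, is ECONNREFUSED. A minimal, self-contained C sketch (illustration only, not SPDK code) that prints the symbolic meaning of these two codes:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* errno values taken from the log above: 111 and 9.
     * On Linux these map to ECONNREFUSED and EBADF respectively. */
    int codes[] = { 111, 9 };
    for (unsigned i = 0; i < sizeof(codes) / sizeof(codes[0]); i++)
        printf("errno %d -> %s\n", codes[i], strerror(codes[i]));
    return 0;
}
```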
00:29:04.290 [2024-07-15 11:39:47.738797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.738827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.738967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.738997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.739271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.739301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.739484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.739513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.739668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.739698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.739823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.739852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.740056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.740088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.740249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.740278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.740538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.740566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.740760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.740788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 
00:29:04.290 [2024-07-15 11:39:47.740904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.740932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.741080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.741109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.741316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.741347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.741549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.741577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.741830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.741858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.742120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.742149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.742336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.742366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.742571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.742600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.742733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.290 [2024-07-15 11:39:47.742761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.290 qpair failed and we were unable to recover it. 00:29:04.290 [2024-07-15 11:39:47.742952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.742991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 
00:29:04.291 [2024-07-15 11:39:47.743190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.743219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.743443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.743472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.743675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.743704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.743981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.744010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.744211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.744256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.744445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.744474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.744687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.744716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.744914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.744943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.745135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.745164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.745460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.745490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 
00:29:04.291 [2024-07-15 11:39:47.745696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.745725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.745916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.745944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.746143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.746171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.746372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.746401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.746650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.746677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.746874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.746903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.747125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.747153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.747419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.747449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.747734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.747763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.748050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.748078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 
00:29:04.291 [2024-07-15 11:39:47.748283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.748314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.748558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.748587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.748838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.748867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.749066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.749095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.749306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.749337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.749586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.749614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.749776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.749812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.750029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.750058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.750246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.750277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.750482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.750510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 
00:29:04.291 [2024-07-15 11:39:47.750703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.750732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.750878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.750907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.751157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.751186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.751447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.751476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.751615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.751643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.751772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.751801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.751990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.752019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.291 qpair failed and we were unable to recover it. 00:29:04.291 [2024-07-15 11:39:47.752200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.291 [2024-07-15 11:39:47.752236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.752433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.752463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.752689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.752725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 
00:29:04.292 [2024-07-15 11:39:47.752867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.752896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.753191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.753220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.753477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.753506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.753711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.753741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.753950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.753978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.754099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.754128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.754328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.754359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.754554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.754583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.754770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.754799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.754913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.754942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 
00:29:04.292 [2024-07-15 11:39:47.755140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.755169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.755361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.755391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.755609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.755637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.755829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.755858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.756068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.756096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.756298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.756328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.756589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.756617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.756905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.756934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.757153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.757182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.757486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.757515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 
00:29:04.292 [2024-07-15 11:39:47.757635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.757665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.757875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.757903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.758174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.758203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.758463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.758493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.758762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.758791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.758913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.758941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.759088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.759128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.759379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.759413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.759619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.759649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.759827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.759856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 
00:29:04.292 [2024-07-15 11:39:47.760046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.760075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.760307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.760338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.760496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.760525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.760714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.292 [2024-07-15 11:39:47.760743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.292 qpair failed and we were unable to recover it. 00:29:04.292 [2024-07-15 11:39:47.760924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.760954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.761138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.761167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.761370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.761400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.761596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.761625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.761767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.761796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.762075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.762104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 
00:29:04.293 [2024-07-15 11:39:47.762382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.762413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.762547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.762576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.762763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.762792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.762989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.763018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.763218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.763258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.763528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.763557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.763691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.763720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.763872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.763901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.764120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.764150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.764279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.764309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 
00:29:04.293 [2024-07-15 11:39:47.764516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.764544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.764696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.764724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.764861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.764889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.765007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.765046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.765168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.765197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.765348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.765381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.765636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.765665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.765839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.765869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.766089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.766117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.766366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.766396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 
00:29:04.293 [2024-07-15 11:39:47.766592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.766621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.766769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.766797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.766968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.766996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.767197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.767235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.767458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.767487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.767788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.767817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.768067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.768096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.768286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.768317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.768584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.768613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.768752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.768781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 
00:29:04.293 [2024-07-15 11:39:47.768920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.768950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.769094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.769122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.769394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.769424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.769614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.769643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.769828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.769857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.293 qpair failed and we were unable to recover it. 00:29:04.293 [2024-07-15 11:39:47.769995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.293 [2024-07-15 11:39:47.770024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.770138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.770166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.770378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.770407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.770591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.770619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.770821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.770850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 
00:29:04.294 [2024-07-15 11:39:47.771051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.771080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.771296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.771326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.771576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.771605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.771858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.771887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.772105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.772134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.772334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.772363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.772578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.772607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.772861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.772890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.773019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.773047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.773242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.773271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 
00:29:04.294 [2024-07-15 11:39:47.773444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.773474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.773669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.773698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.773886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.773915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.774114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.774149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.774354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.774383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.774571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.774599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.774732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.774760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.774899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.774927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.775213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.775249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.775421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.775451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 
00:29:04.294 [2024-07-15 11:39:47.775747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.775775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.775975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.776004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.776137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.776165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.776372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.776401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.776613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.776641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.776767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.776796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.777001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.777029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.777318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.777348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.777466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.777495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.777743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.777771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 
00:29:04.294 [2024-07-15 11:39:47.777972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.777999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.778157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.778185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.778404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.778434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.778702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.778730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.779011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.779039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.779245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.779274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.294 qpair failed and we were unable to recover it. 00:29:04.294 [2024-07-15 11:39:47.779498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.294 [2024-07-15 11:39:47.779528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.779725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.779753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.780049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.780078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.780300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.780330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 
00:29:04.295 [2024-07-15 11:39:47.780538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.780567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.780706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.780735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.780943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.780973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.781112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.781140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.781285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.781315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.781497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.781526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.781681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.781710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.781981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.782011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.782263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.782293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.782489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.782517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 
00:29:04.295 [2024-07-15 11:39:47.782659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.782686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.782831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.782860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.783109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.783138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.783332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.783368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.783498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.783526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.783728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.783756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.784009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.784040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.784179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.784207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.784412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.784442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.784640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.784669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 
00:29:04.295 [2024-07-15 11:39:47.784866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.784895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.785111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.785140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.785272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.785301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.785506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.785535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.785734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.785762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.785958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.785986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.786181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.786212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.786513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.786543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.786679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.786709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.786864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.786892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 
00:29:04.295 [2024-07-15 11:39:47.787032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.787061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.787259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.787290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.787497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.787527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.787663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.787692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.787951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.787979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.788182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.788211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.788339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.788368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.295 qpair failed and we were unable to recover it. 00:29:04.295 [2024-07-15 11:39:47.788497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.295 [2024-07-15 11:39:47.788525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.788718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.788746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.788938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.788968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 
00:29:04.296 [2024-07-15 11:39:47.789233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.789269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.789404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.789434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.789637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.789666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.789929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.789958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.790144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.790173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.790414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.790444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.790575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.790604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.790809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.790838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.791034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.791063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.791200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.791237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 
00:29:04.296 [2024-07-15 11:39:47.791436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.791465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.791638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.791667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.791945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.791973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.792172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.792201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.792415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.792445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.792619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.792649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.792855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.792884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.793139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.793169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.793371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.793402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.793607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.793636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 
00:29:04.296 [2024-07-15 11:39:47.793767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.793797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.794004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.794033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.794173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.794203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.794474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.794504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.794697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.794727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.794859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.794889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.795000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.795029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.795258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.795292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.795572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.795601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.795733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.795762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 
00:29:04.296 [2024-07-15 11:39:47.795885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.795913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.796115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.296 [2024-07-15 11:39:47.796143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.296 qpair failed and we were unable to recover it. 00:29:04.296 [2024-07-15 11:39:47.796334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.796363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.796549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.796578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.796774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.796803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.797102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.797132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.797267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.797296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.797589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.797617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.797806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.797835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.798070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.798099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 
00:29:04.297 [2024-07-15 11:39:47.798236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.798267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.798504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.798534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.798752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.798782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.799079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.799107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.799248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.799278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.799409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.799438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.799624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.799653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.799849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.799878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.800079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.800107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.800241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.800282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 
00:29:04.297 [2024-07-15 11:39:47.800515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.800543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.800728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.800755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.800957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.800986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.801114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.801143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.801281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.801311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.801454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.801484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.801666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.801695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.801969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.801998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.802123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.802151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.802350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.802379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 
00:29:04.297 [2024-07-15 11:39:47.802632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.802661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.802867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.802895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.803101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.803131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.803394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.803424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.803623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.803652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.803901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.803931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.804138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.804166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.804410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.804444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.804556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.804584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.804849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.804878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 
00:29:04.297 [2024-07-15 11:39:47.805063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.805091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.297 [2024-07-15 11:39:47.805297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.297 [2024-07-15 11:39:47.805327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.297 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.805454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.805483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.805686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.805716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.805991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.806020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.806272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.806302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.806502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.806531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.806720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.806750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.806894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.806921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.807103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.807131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 
00:29:04.298 [2024-07-15 11:39:47.807252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.807282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.807497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.807527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.807721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.807751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.807948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.807976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.808112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.808141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.808358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.808389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.808516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.808545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.808673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.808701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.808837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.808866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.809005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.809035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 
00:29:04.298 [2024-07-15 11:39:47.809172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.809201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.809380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.809409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.809541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.809571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.809699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.809728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.809858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.809886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.810144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.810173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.810455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.810486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.810735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.810765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.811080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.811109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.811393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.811424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 
00:29:04.298 [2024-07-15 11:39:47.811619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.811648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.811896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.811925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.812123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.812152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.812332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.812363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.812564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.812593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.812780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.812808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.812993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.813022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.813222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.813264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.813460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.813490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.813686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.813714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 
00:29:04.298 [2024-07-15 11:39:47.813994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.814023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.298 qpair failed and we were unable to recover it. 00:29:04.298 [2024-07-15 11:39:47.814322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.298 [2024-07-15 11:39:47.814352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.814524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.814553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.814750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.814780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.814967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.814996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.815180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.815209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.815363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.815393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.815681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.815711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.815827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.815857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.816065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.816108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 
00:29:04.299 [2024-07-15 11:39:47.816314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.816347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.816633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.816664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.816814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.816843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.817016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.817045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.817196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.817235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.817472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.817502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.817723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.817753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.817879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.817908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.818128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.818158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.818390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.818421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 
00:29:04.299 [2024-07-15 11:39:47.818568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.818598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.818739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.818769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.818971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.819001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.819223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.819263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.819381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.819411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.819557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.819587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.819703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.819733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.819960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.819991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.820206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.820248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.820419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.820449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 
00:29:04.299 [2024-07-15 11:39:47.820587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.820617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.820811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.820841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.821023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.821053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.821244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.821275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.821547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.821576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.821834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.821863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.821989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.822018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.822205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.822261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.822385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.822415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.822694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.822723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 
00:29:04.299 [2024-07-15 11:39:47.822848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.822877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.299 qpair failed and we were unable to recover it. 00:29:04.299 [2024-07-15 11:39:47.823058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.299 [2024-07-15 11:39:47.823087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.823222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.823262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.823460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.823490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.823750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.823781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.823982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.824012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.824204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.824242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.824438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.824468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.824668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.824697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.824973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.825002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 
00:29:04.300 [2024-07-15 11:39:47.825151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.825180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.825461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.825491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.825696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.825726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.825862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.825892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.826011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.826040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.826246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.826278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.826399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.826429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.826561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.826590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.826838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.826867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.827078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.827107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 
00:29:04.300 [2024-07-15 11:39:47.827247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.827278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.827501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.827531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.827654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.827683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.827932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.827962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.828172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.828202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.828386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.828417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.828611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.828640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.828765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.828795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.828997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.829027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.829300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.829329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 
00:29:04.300 [2024-07-15 11:39:47.829579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.829608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.829800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.829830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.830028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.830058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.830255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.830286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.300 [2024-07-15 11:39:47.830509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.300 [2024-07-15 11:39:47.830539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.300 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.830771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.830800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.831051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.831081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.831202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.831259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.831466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.831496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.831622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.831651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 
00:29:04.301 [2024-07-15 11:39:47.831847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.831877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.832081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.832111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.832328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.832359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.832492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.832522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.832693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.832723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.833006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.833035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.833250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.833280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.833496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.833526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.833729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.833759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.833902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.833932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 
00:29:04.301 [2024-07-15 11:39:47.834125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.834154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.834426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.834458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.834599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.834629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.834813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.834842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.834976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.835005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.835191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.835220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.835441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.835471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.835605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.835635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.301 [2024-07-15 11:39:47.835767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.301 [2024-07-15 11:39:47.835797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.301 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.836018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.836049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 
00:29:04.585 [2024-07-15 11:39:47.836192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.836223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.836365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.836396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.836603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.836632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.836819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.836849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.837075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.837104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.837239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.837270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.837487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.837517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.837643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.837672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.837856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.837885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.838031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.838061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 
00:29:04.585 [2024-07-15 11:39:47.838196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.838235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.838368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.838398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.838596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.838626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.838828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.838857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.839137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.839167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.839315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.839345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.839468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.839498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.839635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.839670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.839947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.839976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.840177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.840207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 
00:29:04.585 [2024-07-15 11:39:47.840475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.840505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.840688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.840718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.840968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.840998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.841199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.841237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.841428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.841457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.841658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.841688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.841946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.841975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.842216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.842254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.842532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.842562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 00:29:04.585 [2024-07-15 11:39:47.842756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.842786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.585 qpair failed and we were unable to recover it. 
00:29:04.585 [2024-07-15 11:39:47.842992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.585 [2024-07-15 11:39:47.843022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.843257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.843289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.843487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.843517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.843787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.843817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.843984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.844014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.844186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.844216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.844434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.844464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.844737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.844766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.844991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.845021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.845296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.845327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 
00:29:04.586 [2024-07-15 11:39:47.845600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.845630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.845830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.845860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.846054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.846083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.846338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.846369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.846586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.846616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.846800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.846829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.846964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.846993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.847195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.847243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.847520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.847550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.847802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.847832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 
00:29:04.586 [2024-07-15 11:39:47.847955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.847985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.848186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.848215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.848421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.848451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.848585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.848615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.848817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.848847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.849098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.849128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.849311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.849342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.849562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.849597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.849805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.849835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.850034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.850064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 
00:29:04.586 [2024-07-15 11:39:47.850377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.850408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.850685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.850715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.850829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.850859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.851107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.851137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.851281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.851310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.851450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.851480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.851678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.851708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.851906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.851935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.852122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.852151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 00:29:04.586 [2024-07-15 11:39:47.852368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.586 [2024-07-15 11:39:47.852398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.586 qpair failed and we were unable to recover it. 
00:29:04.586 [2024-07-15 11:39:47.852610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.586 [2024-07-15 11:39:47.852639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420
00:29:04.586 qpair failed and we were unable to recover it.
00:29:04.586 - 00:29:04.592 [the same three-line failure repeats continuously from 11:39:47.852840 through 11:39:47.899185: connect() failed, errno = 111 in posix.c:1038:posix_sock_create, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reporting a sock connection error against addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", cycling through tqpair handles 0x7f6250000b90, 0x7f6260000b90, 0x7f6258000b90 and 0x2408ed0]
00:29:04.592 [2024-07-15 11:39:47.899473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.899503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.899658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.899687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.899808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.899837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.900022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.900051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.900245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.900275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.900460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.900495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.900716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.900745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.900994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.901024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.901220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.901260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.901395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.901424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 
00:29:04.592 [2024-07-15 11:39:47.901703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.901732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.901927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.901957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.902143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.902173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.902474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.902505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.902719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.902748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.902932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.902962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.903145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.903174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.903376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.903406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.903607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.903637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.903841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.903878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 
00:29:04.592 [2024-07-15 11:39:47.904155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.904185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.904324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.904355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.904613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.904641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.904851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.904880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.905104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.905133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.905292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.905321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.592 qpair failed and we were unable to recover it. 00:29:04.592 [2024-07-15 11:39:47.905589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.592 [2024-07-15 11:39:47.905618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.905819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.905848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.906000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.906028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.906170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.906198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 
00:29:04.593 [2024-07-15 11:39:47.906392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.906426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.906681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.906709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.906881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.906915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.907059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.907088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.907375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.907408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.907641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.907670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.907792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.907821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.908100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.908130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.908319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.908350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.908472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.908502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 
00:29:04.593 [2024-07-15 11:39:47.908700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.908729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.908868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.908897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.909110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.909139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.909347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.909377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.909503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.909532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.909752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.909782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.909985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.910015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.910243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.910273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.910422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.910451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.910707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.910736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 
00:29:04.593 [2024-07-15 11:39:47.910864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.910894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.911022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.911051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.911320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.911351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.911534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.911562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.911705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.911734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.911928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.911958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.912243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.912273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.593 [2024-07-15 11:39:47.912477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.593 [2024-07-15 11:39:47.912508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.593 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.912769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.912799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.913063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.913103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 
00:29:04.594 [2024-07-15 11:39:47.913331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.913364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.913565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.913596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.913801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.913830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.913963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.913993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.914250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.914280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.914423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.914453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.914718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.914747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.914959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.914989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.915128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.915156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.915346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.915376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 
00:29:04.594 [2024-07-15 11:39:47.915562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.915592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.915790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.915819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.916032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.916073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.916234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.916266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.916412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.916442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.916594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.916623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.916905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.916934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.917155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.917185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.917349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.917380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.917502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.917530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 
00:29:04.594 [2024-07-15 11:39:47.917643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.917672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.917955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.917983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.918186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.918215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.918492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.918522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.918775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.918804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.919058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.919087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.919290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.919320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.919454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.919484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.919737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.919766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.594 [2024-07-15 11:39:47.919900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.919929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 
00:29:04.594 [2024-07-15 11:39:47.920128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.594 [2024-07-15 11:39:47.920157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.594 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.920344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.920374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.920510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.920540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.920678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.920707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.920881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.920910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.921108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.921138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.921337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.921366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.921483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.921512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.921720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.921750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.921965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.921998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 
00:29:04.595 [2024-07-15 11:39:47.922122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.922151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.922257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.922287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.922483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.922513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.922717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.922746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.922927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.922956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.923142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.923170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.923389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.923419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.923671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.923700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.923901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.923930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.924074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.924104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 
00:29:04.595 [2024-07-15 11:39:47.924305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.924334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.924543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.924572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.924825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.924860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.925058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.925087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.925388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.925418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.925535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.925565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.925695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.925724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.925882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.925912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.926115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.926143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.926339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.926370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 
00:29:04.595 [2024-07-15 11:39:47.926566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.926595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.926803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.926832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.927012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.927041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.927315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.927345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.927495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.927524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.927662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.927692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.927975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.595 [2024-07-15 11:39:47.928005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.595 qpair failed and we were unable to recover it. 00:29:04.595 [2024-07-15 11:39:47.928201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.928236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.928385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.928414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.928611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.928640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 
00:29:04.596 [2024-07-15 11:39:47.928823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.928852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.929125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.929155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.929360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.929390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.929521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.929550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.929766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.929796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.930022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.930051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.930187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.930217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.930420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.930450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.930708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.930738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.930946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.930983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 
00:29:04.596 [2024-07-15 11:39:47.931204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.931245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.931443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.931473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.931722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.931752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.932006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.932035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.932143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.932172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.932321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.932352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.932604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.932633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.932834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.932864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.932997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.933027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.933253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.933284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 
00:29:04.596 [2024-07-15 11:39:47.933468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.933497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.933614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.933643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.933836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.933866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.934018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.934048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.934189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.934219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.934417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.934446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.934704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.934733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.934854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.934883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.935015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.935045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.935161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.935190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 
00:29:04.596 [2024-07-15 11:39:47.935448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.935478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.935671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.935702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.935886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.935916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.596 [2024-07-15 11:39:47.936030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.596 [2024-07-15 11:39:47.936059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.596 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.936197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.936234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.936354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.936383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.936563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.936597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.936740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.936769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.936905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.936934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.937117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.937147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-07-15 11:39:47.937279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.937310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.937442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.937472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.937699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.937728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.937905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.937934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.938205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.938243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.938514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.938543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.938821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.938850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.939101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.939130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.939270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.939300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.939503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.939533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-07-15 11:39:47.939787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.939816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.940012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.940041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.940156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.940185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.940480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.940510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.940699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.940728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.940945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.940973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.941256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.941287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.941491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.941521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.941712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.941741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.942025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.942054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-07-15 11:39:47.942188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.942218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.942372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.942404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.942551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.942580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.942804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.942839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.943013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.943043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.943275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.943304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.943444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.943473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.943612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.943641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.943838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.597 [2024-07-15 11:39:47.943867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.597 qpair failed and we were unable to recover it. 00:29:04.597 [2024-07-15 11:39:47.944057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.944087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 
00:29:04.598 [2024-07-15 11:39:47.944294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.944323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.944506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.944535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.944682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.944712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.944843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.944871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.945066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.945094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.945268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.945298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.945425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.945454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.945582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.945612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.945816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.945845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.945982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.946011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 
00:29:04.598 [2024-07-15 11:39:47.946152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.946181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.946374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.946404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.946585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.946615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.946799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.946829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.947055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.947084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.947223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.947261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.947534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.947564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.947785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.947814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.948084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.948113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.948302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.948331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 
00:29:04.598 [2024-07-15 11:39:47.948532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.948571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.948771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.948800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.948946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.948975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.949234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.949264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.949447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.949476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.949674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.949703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.949850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.949880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-07-15 11:39:47.950060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.598 [2024-07-15 11:39:47.950090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.950216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.950255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.950389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.950419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 
00:29:04.599 [2024-07-15 11:39:47.950691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.950721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.950931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.950961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.951186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.951215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.951442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.951471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.951679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.951714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.951867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.951896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.952071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.952100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.952352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.952382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.952504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.952533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.952784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.952813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 
00:29:04.599 [2024-07-15 11:39:47.952945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.952974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.953124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.953153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.953361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.953391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.953671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.953700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.953964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.953993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.954246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.954277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.954492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.954522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.954717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.954752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.955005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.955035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.955254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.955284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 
00:29:04.599 [2024-07-15 11:39:47.955403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.955433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.955563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.955591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.955780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.955810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.956082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.956112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.956263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.956294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.956486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.956515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.956782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.956814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.957010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.957039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.957163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.957192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.957343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.957374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 
00:29:04.599 [2024-07-15 11:39:47.957558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.957587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.957784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.957814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.958017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.599 [2024-07-15 11:39:47.958046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.599 qpair failed and we were unable to recover it. 00:29:04.599 [2024-07-15 11:39:47.958261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.958291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.958466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.958495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.958632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.958661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.958847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.958876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.959064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.959093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.959290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.959320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.959509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.959538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 
00:29:04.600 [2024-07-15 11:39:47.959759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.959788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.959989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.960019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.960291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.960321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.960541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.960571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.960794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.960833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.960972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.961001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.961277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.961310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.961511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.961541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.961737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.961766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.961898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.961927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 
00:29:04.600 [2024-07-15 11:39:47.962198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.962237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.962512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.962541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.962694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.962723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.962919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.962948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.963158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.963187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.963398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.963429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.963680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.963709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.963839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.963873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.964124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.964153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.964368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.964398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 
00:29:04.600 [2024-07-15 11:39:47.964583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.964612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.964749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.964778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.964904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.964933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.965075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.965104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.965355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.965386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.965571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.965600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.965807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.965837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.965971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.965999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.966184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.600 [2024-07-15 11:39:47.966212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.600 qpair failed and we were unable to recover it. 00:29:04.600 [2024-07-15 11:39:47.966425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.966456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 
00:29:04.601 [2024-07-15 11:39:47.966709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.966738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.966948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.966977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.967181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.967210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.967441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.967471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.967604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.967633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.967823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.967852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.968101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.968130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.968324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.968354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.968554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.968583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.968775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.968804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 
00:29:04.601 [2024-07-15 11:39:47.969000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.969029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.969171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.969200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.969397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.969427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.969623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.969652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.969885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.969921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.970107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.970136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.970343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.970375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.970653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.970683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.970958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.970987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.971169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.971198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 
00:29:04.601 [2024-07-15 11:39:47.971379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.971411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.971564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.971593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.971781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.971810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.971959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.971988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.972191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.972222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.972436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.972465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.972667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.972696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.972910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.972954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.973162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.973192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.973403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.973433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 
00:29:04.601 [2024-07-15 11:39:47.973575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.973603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.973866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.973896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.974165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.974193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.974406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.974442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.974654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.974684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.974828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.974858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.975144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.975173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.975409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.975439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.975660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.975690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 00:29:04.601 [2024-07-15 11:39:47.975900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.601 [2024-07-15 11:39:47.975930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.601 qpair failed and we were unable to recover it. 
00:29:04.602 [2024-07-15 11:39:47.976066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.976096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.976219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.976261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.976449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.976479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.976698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.976727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.976937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.976966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.977101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.977129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.977252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.977283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.977555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.977585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.977835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.977865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.978124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.978153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 
00:29:04.602 [2024-07-15 11:39:47.978293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.978323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.978525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.978555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.978853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.978882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.979069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.979099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.979217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.979258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.979569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.979599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.979752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.979781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.979932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.979962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.980209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.980245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.980505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.980534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 
00:29:04.602 [2024-07-15 11:39:47.980716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.980746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.980886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.980916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.981116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.981146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.981423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.981454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.981708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.981738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.981991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.982020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.982272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.982302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.982487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.982517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.982727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.982760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.982965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.982996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 
00:29:04.602 [2024-07-15 11:39:47.983237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.983268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.983470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.983500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.983615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.983644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.983834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.983862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.984012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.984042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.984171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.984200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.984395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.984425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.984644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.984672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.984789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.984820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.985027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.985056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 
00:29:04.602 [2024-07-15 11:39:47.985181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.602 [2024-07-15 11:39:47.985210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.602 qpair failed and we were unable to recover it. 00:29:04.602 [2024-07-15 11:39:47.985470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.985504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.985650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.985679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.985822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.985852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.986128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.986158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.986295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.986326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.986529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.986559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.986698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.986726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.986916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.986947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.987132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.987162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 
00:29:04.603 [2024-07-15 11:39:47.987361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.987391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.987589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.987618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.987891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.987921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.988116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.988145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.988268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.988298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.988441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.988470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.988720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.988750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.988943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.988971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.989166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.989195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.989336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.989369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 
00:29:04.603 [2024-07-15 11:39:47.989576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.989606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.989732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.989762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.989892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.989922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.990126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.990156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.990297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.990328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.990537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.990567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.990763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.990795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.990931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.990961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.991095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.991127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.991416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.991447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 
00:29:04.603 [2024-07-15 11:39:47.991637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.991671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.991843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.991873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.992078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.992108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.992334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.992365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.603 qpair failed and we were unable to recover it. 00:29:04.603 [2024-07-15 11:39:47.992509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.603 [2024-07-15 11:39:47.992538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.992722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.992752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.992943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.992973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.993179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.993209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.993473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.993504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.993700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.993730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 
00:29:04.604 [2024-07-15 11:39:47.993924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.993954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.994202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.994240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.994395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.994425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.994625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.994656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.994778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.994808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.995077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.995107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.995266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.995298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.995498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.995528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.995780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.995810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.995953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.995982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 
00:29:04.604 [2024-07-15 11:39:47.996180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.996209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.996338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.996368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.996483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.996512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.996715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.996745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.996869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.996898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.997152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.997186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.997403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.997433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.997634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.997665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.997889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.997919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.998061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.998090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 
00:29:04.604 [2024-07-15 11:39:47.998312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.998343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.998549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.998578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.998708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.998738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.998876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.998905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.999180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.999210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.999381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.999412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.999549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.999579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.999780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.999810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:47.999953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:47.999988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:48.000245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:48.000276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 
00:29:04.604 [2024-07-15 11:39:48.000587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:48.000617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:48.000817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:48.000846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:48.001040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:48.001070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:48.001201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:48.001241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:48.001438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:48.001468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.604 qpair failed and we were unable to recover it. 00:29:04.604 [2024-07-15 11:39:48.001668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.604 [2024-07-15 11:39:48.001697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.001970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.001999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.002198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.002238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.002491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.002520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.002658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.002687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 
00:29:04.605 [2024-07-15 11:39:48.002886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.002915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.003115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.003144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.003332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.003363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.003549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.003578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.003680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.003709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.003989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.004019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.004223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.004261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.004393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.004422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.004612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.004641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.004835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.004864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 
00:29:04.605 [2024-07-15 11:39:48.005000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.005030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.005160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.005190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.005394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.005424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.005549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.005578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.005834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.005863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.006078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.006112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.006400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.006430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.006549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.006579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.006797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.006827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.007020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.007050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 
00:29:04.605 [2024-07-15 11:39:48.007268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.007299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.007517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.007546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.007727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.007757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.007878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.007908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.008094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.008124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.008432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.008463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.008616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.008644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.008893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.008924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.009146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.009176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.009315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.009346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 
00:29:04.605 [2024-07-15 11:39:48.009542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.009572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.009870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.009899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.010101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.010131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.010382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.010412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.010663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.010693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.010888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.010918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.605 qpair failed and we were unable to recover it. 00:29:04.605 [2024-07-15 11:39:48.011119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.605 [2024-07-15 11:39:48.011148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.011360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.011390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.011540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.011570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.011843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.011872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 
00:29:04.606 [2024-07-15 11:39:48.011989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.012017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.012218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.012257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.012459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.012489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.012750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.012781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.012995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.013024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.013279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.013309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.013584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.013614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.013799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.013829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.014024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.014052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.014273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.014304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 
00:29:04.606 [2024-07-15 11:39:48.014458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.014488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.014630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.014660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.014847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.014876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.015073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.015103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.015356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.015387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.015638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.015672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.015848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.015877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.016147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.016177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.016326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.016355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.016526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.016555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 
00:29:04.606 [2024-07-15 11:39:48.016786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.016814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.017080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.017109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.017253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.017283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.017511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.017540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.017679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.017708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.017832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.017861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.018111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.018139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.018325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.018355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.018510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.018540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.018738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.018768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 
00:29:04.606 [2024-07-15 11:39:48.018907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.018936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.019067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.019096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.019284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.019315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.019501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.019530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.019664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.019693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.019876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.019905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.020095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.606 [2024-07-15 11:39:48.020126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.606 qpair failed and we were unable to recover it. 00:29:04.606 [2024-07-15 11:39:48.020317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.020347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.020489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.020519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.020704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.020733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 
00:29:04.607 [2024-07-15 11:39:48.020868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.020897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.021065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.021094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.021241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.021271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.021441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.021470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.021670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.021700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.021920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.021949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.022157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.022186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.022394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.022424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.022609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.022638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.022790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.022820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 
00:29:04.607 [2024-07-15 11:39:48.022966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.022996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.023208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.023270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.023476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.023505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.023635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.023664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.023781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.023811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.024067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.024101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.024358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.024389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.024514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.024543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.024724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.024753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.024901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.024930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 
00:29:04.607 [2024-07-15 11:39:48.025180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.025209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.025350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.025379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.025598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.025627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.025850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.025879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.025995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.026025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.026210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.026244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.026467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.026497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.026651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.026681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.026865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.026893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.027035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.027067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 
00:29:04.607 [2024-07-15 11:39:48.027206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.027242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.607 [2024-07-15 11:39:48.027400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.607 [2024-07-15 11:39:48.027430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.607 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.027643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.027672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.027784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.027814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.028015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.028044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.028166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.028195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.028386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.028416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.028637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.028666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.028915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.028945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.029153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.029183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 
00:29:04.608 [2024-07-15 11:39:48.029322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.029351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.029644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.029674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.029945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.029975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.030125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.030155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.030391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.030423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.030616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.030645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.030932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.030964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.031237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.031267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.031568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.031598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.031769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.031797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 
00:29:04.608 [2024-07-15 11:39:48.032049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.032078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.032262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.032292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.032402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.032431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.032620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.032650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.032784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.032814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.032953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.032987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.033162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.033191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.033385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.033415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.033638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.033667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.033904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.033933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 
00:29:04.608 [2024-07-15 11:39:48.034048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.034077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.034199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.034237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.034495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.034523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.034643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.034673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.034850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.034882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.035031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.035062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.035260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.035290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.035492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.035521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.035658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.035687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.035810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.035841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 
00:29:04.608 [2024-07-15 11:39:48.036023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.036052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.036232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.036262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.608 [2024-07-15 11:39:48.036490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.608 [2024-07-15 11:39:48.036519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.608 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.036796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.036824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.036967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.036997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.037203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.037240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.037368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.037397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.037536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.037565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.037773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.037802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.037940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.037969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 
00:29:04.609 [2024-07-15 11:39:48.038107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.038136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.038310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.038340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.038565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.038595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.038749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.038778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.038896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.038925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.039247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.039279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.039408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.039438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.039636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.039665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.039796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.039825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.039959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.039989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 
00:29:04.609 [2024-07-15 11:39:48.040119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.040148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.040345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.040375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.040527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.040557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.040704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.040733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.040868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.040898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.041096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.041132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.041330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.041359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.041581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.041610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.041867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.041896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.042023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.042052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 
00:29:04.609 [2024-07-15 11:39:48.042254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.042284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.042473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.042502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.042632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.042661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.042859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.042888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.043016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.043045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.043240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.043270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.043412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.043441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.043626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.043655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.043791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.043820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 00:29:04.609 [2024-07-15 11:39:48.044024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.609 [2024-07-15 11:39:48.044054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.609 qpair failed and we were unable to recover it. 
00:29:04.611 [2024-07-15 11:39:48.056720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.056749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 [2024-07-15 11:39:48.056900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.056929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 [2024-07-15 11:39:48.057111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.057140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 [2024-07-15 11:39:48.057322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.057352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 [2024-07-15 11:39:48.057479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.057508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 [2024-07-15 11:39:48.057761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.057795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 [2024-07-15 11:39:48.058066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.058095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 [2024-07-15 11:39:48.058380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.058409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 [2024-07-15 11:39:48.058551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.058580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:04.611 qpair failed and we were unable to recover it. 
00:29:04.611 [2024-07-15 11:39:48.058752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.058781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:04.611 [2024-07-15 11:39:48.058967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.058996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 [2024-07-15 11:39:48.059199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.059234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 [2024-07-15 11:39:48.059430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.059460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:04.611 [2024-07-15 11:39:48.059597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.059626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.611 [2024-07-15 11:39:48.059824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.059853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 [2024-07-15 11:39:48.059964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.059994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 00:29:04.611 [2024-07-15 11:39:48.060191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.611 [2024-07-15 11:39:48.060219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.611 qpair failed and we were unable to recover it. 
00:29:04.612 [2024-07-15 11:39:48.062878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.612 [2024-07-15 11:39:48.062908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.612 qpair failed and we were unable to recover it. 00:29:04.612 [2024-07-15 11:39:48.063115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.612 [2024-07-15 11:39:48.063145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.612 qpair failed and we were unable to recover it. 00:29:04.612 [2024-07-15 11:39:48.063288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.612 [2024-07-15 11:39:48.063318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.612 qpair failed and we were unable to recover it. 00:29:04.612 [2024-07-15 11:39:48.063539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.612 [2024-07-15 11:39:48.063580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.612 qpair failed and we were unable to recover it. 00:29:04.612 [2024-07-15 11:39:48.063839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.612 [2024-07-15 11:39:48.063869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.612 qpair failed and we were unable to recover it. 00:29:04.612 [2024-07-15 11:39:48.064064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.612 [2024-07-15 11:39:48.064093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.612 qpair failed and we were unable to recover it. 00:29:04.612 [2024-07-15 11:39:48.064288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.612 [2024-07-15 11:39:48.064319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.612 qpair failed and we were unable to recover it. 00:29:04.612 [2024-07-15 11:39:48.064466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.612 [2024-07-15 11:39:48.064496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.612 qpair failed and we were unable to recover it. 00:29:04.612 [2024-07-15 11:39:48.064696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.612 [2024-07-15 11:39:48.064725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.612 qpair failed and we were unable to recover it. 00:29:04.612 [2024-07-15 11:39:48.064926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.612 [2024-07-15 11:39:48.064955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.612 qpair failed and we were unable to recover it. 
00:29:04.613 [2024-07-15 11:39:48.071111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.613 [2024-07-15 11:39:48.071140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.613 qpair failed and we were unable to recover it. 00:29:04.613 [2024-07-15 11:39:48.071357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.613 [2024-07-15 11:39:48.071388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.613 qpair failed and we were unable to recover it. 00:29:04.613 [2024-07-15 11:39:48.071594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.613 [2024-07-15 11:39:48.071623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.613 qpair failed and we were unable to recover it. 00:29:04.613 [2024-07-15 11:39:48.071830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.613 [2024-07-15 11:39:48.071873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.613 qpair failed and we were unable to recover it. 00:29:04.613 [2024-07-15 11:39:48.072062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.613 [2024-07-15 11:39:48.072092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.613 qpair failed and we were unable to recover it. 00:29:04.613 [2024-07-15 11:39:48.072198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.613 [2024-07-15 11:39:48.072239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.613 qpair failed and we were unable to recover it. 00:29:04.613 [2024-07-15 11:39:48.072369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.613 [2024-07-15 11:39:48.072398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.613 qpair failed and we were unable to recover it. 00:29:04.613 [2024-07-15 11:39:48.072567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.613 [2024-07-15 11:39:48.072596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.613 qpair failed and we were unable to recover it. 00:29:04.613 [2024-07-15 11:39:48.072864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.613 [2024-07-15 11:39:48.072894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.613 qpair failed and we were unable to recover it. 00:29:04.613 [2024-07-15 11:39:48.073093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.613 [2024-07-15 11:39:48.073122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.613 qpair failed and we were unable to recover it. 
00:29:04.614 [2024-07-15 11:39:48.077664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.077700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.077849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.077881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.078021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.078052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.078176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.078205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.078480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.078510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.078762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.078791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.079001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.079031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.079165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.079196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.079350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.079384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.079492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.079522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 
00:29:04.614 [2024-07-15 11:39:48.081697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.081726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.081911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.081940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.082078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.082106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.082254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.082284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.082429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.082458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.082659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.614 [2024-07-15 11:39:48.082689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.614 qpair failed and we were unable to recover it. 00:29:04.614 [2024-07-15 11:39:48.082891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.082920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.083059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.083093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.083297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.083327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.083518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.083547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 
00:29:04.615 [2024-07-15 11:39:48.083687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.083716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.083857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.083887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.084087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.084117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.084240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.084270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.084411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.084441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.084624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.084654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.084903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.084933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.085065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.085095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.085216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.085256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.085459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.085488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 
00:29:04.615 [2024-07-15 11:39:48.085662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.085696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.085824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.085853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.085986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.086015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.086146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.086175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.086440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.086469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.086720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.086749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.086881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.086910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.087036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.087065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.087202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.087242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.087442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.087472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 
00:29:04.615 [2024-07-15 11:39:48.087591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.087620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.087750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.087779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.087943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.087971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.088100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.088129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.088340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.088371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.088500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.088529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.088746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.088775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.088904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.088933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.089059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.089090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.089272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.089303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 
00:29:04.615 [2024-07-15 11:39:48.089507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.089536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.089725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.089754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.090045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.090075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.090220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.090256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.090477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.090508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.090637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.615 [2024-07-15 11:39:48.090666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.615 qpair failed and we were unable to recover it. 00:29:04.615 [2024-07-15 11:39:48.090788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.090817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6258000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.090972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.091014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6250000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.091158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.091190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.091391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.091425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 
00:29:04.616 [2024-07-15 11:39:48.091566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.091595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.091787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.091815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.091940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.091968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.092100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.092128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.092355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.092385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.092583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.092613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.092742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.092770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.092902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.092932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.093123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.093152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.093342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.093372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 
00:29:04.616 [2024-07-15 11:39:48.093526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.093562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.616 [2024-07-15 11:39:48.093749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.093779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.093910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.093939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:04.616 [2024-07-15 11:39:48.094052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.094081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.094197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.094234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.094333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.616 [2024-07-15 11:39:48.094364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.094500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.094529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.616 [2024-07-15 11:39:48.094727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.094758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 
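For reference (not part of the harness output): the interleaved "rpc_cmd bdev_malloc_create 64 512 -b Malloc0" trace is the test script asking the SPDK target, over its JSON-RPC socket, to create a 64 MB malloc bdev with a 512-byte block size named Malloc0; the lone "Malloc0" echoed a few entries below appears to be that RPC's result. A rough sketch of the underlying request, assuming the default /var/tmp/spdk.sock socket path and an arbitrary request id (both assumptions, not read from this log):

    # Sketch of the JSON-RPC request behind "rpc_cmd bdev_malloc_create 64 512 -b Malloc0".
    import json, socket

    req = {
        "jsonrpc": "2.0",
        "id": 1,                                      # arbitrary id for this sketch
        "method": "bdev_malloc_create",
        "params": {
            "num_blocks": 64 * 1024 * 1024 // 512,    # 64 MB expressed in 512-byte blocks
            "block_size": 512,
            "name": "Malloc0",
        },
    }
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/var/tmp/spdk.sock")                   # default SPDK RPC socket (assumed)
    s.sendall(json.dumps(req).encode())
    print(s.recv(65536).decode())                     # success echoes the new bdev name
    s.close()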
00:29:04.616 [2024-07-15 11:39:48.094895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.094924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.095120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.095150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.095271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.095300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.095551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.095580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.095707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.095736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.616 [2024-07-15 11:39:48.095874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.616 [2024-07-15 11:39:48.095903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.616 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.096089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.096118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.096266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.096295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.096421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.096450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.096700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.096729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 
00:29:04.617 [2024-07-15 11:39:48.096851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.096880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.097070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.097099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.097299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.097329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.097463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.097492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.097683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.097712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.097849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.097878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.098007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.098035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.098242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.098276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.098463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.098492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.098626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.098655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 
00:29:04.617 [2024-07-15 11:39:48.098786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.098815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.098987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.099016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.099200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.099235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.099366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.099394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.099523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.099552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.099666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.099695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.099823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.099852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.099971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.100000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.100254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.100283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.100486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.100515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 
00:29:04.617 [2024-07-15 11:39:48.100703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.100732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.100939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.100969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.101086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.101115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.101317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.101346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.101467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.101496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.101623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.101652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.101837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.101866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.102073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.102102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.102290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.102318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.102588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.102617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 
00:29:04.617 [2024-07-15 11:39:48.102896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.102925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.103049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.103078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.103259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.103289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.103471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.103500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.103693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.103723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.103844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.617 [2024-07-15 11:39:48.103873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.617 qpair failed and we were unable to recover it. 00:29:04.617 [2024-07-15 11:39:48.104001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.104031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.104250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.104281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.104402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.104431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.104621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.104650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 
00:29:04.618 [2024-07-15 11:39:48.104924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.104954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.105153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.105184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.105389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.105418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.105543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.105573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.105763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.105792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.105936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.105965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.106157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.106187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.106331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.106368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.106486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.106516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.106720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.106749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 
00:29:04.618 [2024-07-15 11:39:48.106937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.106968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.107159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.107190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.107407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.107439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.107716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.107746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.107965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.107997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.108196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.108253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.108407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.108438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.108621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.108653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.108772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.108802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.108926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.108956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 
00:29:04.618 [2024-07-15 11:39:48.109157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.109186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.109408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.109440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.109654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.109683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.109810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.109839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.109962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.109992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.110203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.110242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.110547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.110576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.110828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.110857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.111129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.111158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 Malloc0 00:29:04.618 [2024-07-15 11:39:48.111376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.111408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 
00:29:04.618 [2024-07-15 11:39:48.111558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.111587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.111713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.111742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.111874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.111905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.112039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.112067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:04.618 [2024-07-15 11:39:48.112282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.112330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.618 [2024-07-15 11:39:48.112522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.618 [2024-07-15 11:39:48.112552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.618 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.618 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.112713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.112757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.619 [2024-07-15 11:39:48.112954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.112985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 
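For reference (not part of the harness output): "rpc_cmd nvmf_create_transport -t tcp -o" asks the target to initialize its NVMe-oF TCP transport; the "*** TCP Transport Init ***" notice a few entries below is the target acknowledging it. A rough sketch of the equivalent JSON-RPC call; the extra "-o" switch selects a TCP-specific option that is deliberately left out here, and the socket path and request id are assumed defaults rather than values from this log:

    # Sketch of the JSON-RPC request behind "rpc_cmd nvmf_create_transport -t tcp".
    import json, socket

    req = {
        "jsonrpc": "2.0",
        "id": 2,                          # arbitrary id for this sketch
        "method": "nvmf_create_transport",
        "params": {"trtype": "TCP"},
    }
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/var/tmp/spdk.sock")       # default SPDK RPC socket (assumed)
    s.sendall(json.dumps(req).encode())
    print(s.recv(65536).decode())         # target then logs "*** TCP Transport Init ***"
    s.close()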
00:29:04.619 [2024-07-15 11:39:48.113205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.113243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.113380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.113409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.113540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.113571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.113702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.113733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.113860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.113889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.114022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.114051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.114184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.114214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.114445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.114474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.114615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.114645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.114795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.114825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 
00:29:04.619 [2024-07-15 11:39:48.114942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.114971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.115219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.115257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.115468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.115498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.115779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.115808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.116042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.116071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.116275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.116305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.116442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.116472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.116755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.116785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.116910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.116938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.117091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.117121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 
00:29:04.619 [2024-07-15 11:39:48.117322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.117352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.117542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.117576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.117793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.117822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.118093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.118123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.118335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.118366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.118559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.118588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.118789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.118818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.119007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.119037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.119067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.619 [2024-07-15 11:39:48.119245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.119275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.119413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.119442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 
00:29:04.619 [2024-07-15 11:39:48.119665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.119694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.119885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.119914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.120100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.120129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.120326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.120356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.120496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.120525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.120675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.120705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.120886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.120915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.121060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.121088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.121307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.121337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.619 qpair failed and we were unable to recover it. 00:29:04.619 [2024-07-15 11:39:48.121528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.619 [2024-07-15 11:39:48.121557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 
00:29:04.620 [2024-07-15 11:39:48.121759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.121788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.121982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.122011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.122196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.122233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.122426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.122456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.122672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.122701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.122926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.122955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.123102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.123131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.123305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.123335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.123472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.123506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.123708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.123737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 
00:29:04.620 [2024-07-15 11:39:48.123919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.123949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.124141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.124171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.124340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.124370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.124644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.124674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.620 [2024-07-15 11:39:48.124811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.124840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.125037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.125067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:04.620 [2024-07-15 11:39:48.125208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.125246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.620 [2024-07-15 11:39:48.125450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.125478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 
00:29:04.620 [2024-07-15 11:39:48.125685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.620 [2024-07-15 11:39:48.125715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.125834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.125864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.126077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.126108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.126245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.126275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.126512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.126541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.126737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.126767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.126992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.127022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.127137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.127166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.127356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.127386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.127656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.127687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 
00:29:04.620 [2024-07-15 11:39:48.127875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.127904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.128152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.128182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.128312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.128343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.128487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.128516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.128719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.128748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.128996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.129025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.129262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.129293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.129432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.129461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.129665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.129695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.129883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.129911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 
00:29:04.620 [2024-07-15 11:39:48.130093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.130123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.130269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.130299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.130421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.130450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.130578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.620 [2024-07-15 11:39:48.130608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.620 qpair failed and we were unable to recover it. 00:29:04.620 [2024-07-15 11:39:48.130731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.130760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.130957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.130985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.131107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.131136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.131390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.131419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.131556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.131585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2408ed0 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.131884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.131922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 
00:29:04.621 [2024-07-15 11:39:48.132060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.132089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.132288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.132320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.132529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.132558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.132676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.132705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.132902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.132931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.133122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.133151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.133337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.133367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.133641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.133670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.133900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.133928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.134051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.134080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 
00:29:04.621 [2024-07-15 11:39:48.134305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.134337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.134531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.134560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.134691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.134726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.134949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.134977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.135175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.135205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.135424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.135453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.135654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.135683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.135898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.135928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.136156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.136186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.136340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.136371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 
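The blocks above all record the same host-side failure: posix_sock_create()'s connect() toward 10.0.0.2:4420 returns errno = 111, so nvme_tcp_qpair_connect_sock cannot bring the qpair's TCP socket up and the harness notes that the qpair could not be recovered. On Linux errno 111 is ECONNREFUSED, i.e. nothing was accepting on that address and port at the moment of the attempt. A minimal way to confirm the errno mapping on the test node (assuming python3 is available there) is:

# errno 111 -> ECONNREFUSED ("Connection refused") on Linux
python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'   # prints: 111 Connection refused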
00:29:04.621 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.621 [2024-07-15 11:39:48.136577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.136607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.136908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:04.621 [2024-07-15 11:39:48.136939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.137123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.137152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.621 [2024-07-15 11:39:48.137288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.137319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.621 [2024-07-15 11:39:48.137593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.137623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.137845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.137873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.138104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.138133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.138330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.138359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 
00:29:04.621 [2024-07-15 11:39:48.138567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.138598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.621 [2024-07-15 11:39:48.138797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.621 [2024-07-15 11:39:48.138827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.621 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.139022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.139051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.139243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.139273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.139463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.139496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.139717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.139746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.139938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.139967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.140237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.140268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.140493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.140522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.140773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.140807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 
00:29:04.622 [2024-07-15 11:39:48.141009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.141039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.141307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.141338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.141540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.141569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.141694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.141723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.141999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.142029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.142287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.142317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.142570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.142599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.142881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.142910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.143189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.143218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.143502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.143532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 
00:29:04.622 [2024-07-15 11:39:48.143734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.143764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.144023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.144052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.144271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.144302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.144557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.144587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.144868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.144897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.622 [2024-07-15 11:39:48.145149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.145179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.622 [2024-07-15 11:39:48.145396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.145428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.622 [2024-07-15 11:39:48.145625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.145654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 
00:29:04.622 [2024-07-15 11:39:48.145905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.145934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.146212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.146263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.146545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.146574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.146797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.146828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.147055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.147085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.147203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.147243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.147434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.147469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.147755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.147785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.622 [2024-07-15 11:39:48.147972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.622 [2024-07-15 11:39:48.148002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6260000b90 with addr=10.0.0.2, port=4420 00:29:04.622 qpair failed and we were unable to recover it. 
00:29:04.622 [2024-07-15 11:39:48.148089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.622 [2024-07-15 11:39:48.149665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.622 [2024-07-15 11:39:48.149812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.622 [2024-07-15 11:39:48.149858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.622 [2024-07-15 11:39:48.149882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.622 [2024-07-15 11:39:48.149902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.622 [2024-07-15 11:39:48.149954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.622 qpair failed and we were unable to recover it. 00:29:04.884 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.884 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:04.884 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.884 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.884 [2024-07-15 11:39:48.159591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.884 [2024-07-15 11:39:48.159689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.884 [2024-07-15 11:39:48.159724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.884 [2024-07-15 11:39:48.159741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.884 [2024-07-15 11:39:48.159756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.884 [2024-07-15 11:39:48.159792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.884 qpair failed and we were unable to recover it. 
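Interleaved with the connection retries, the host/target_disconnect.sh xtrace lines show the target side being configured over RPC: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, attach the Malloc0 namespace, and add the subsystem and discovery listeners on 10.0.0.2:4420, at which point the target prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above. Collected in one place, the traced sequence is roughly the sketch below; rpc_cmd is the autotest shell wrapper that forwards its arguments to scripts/rpc.py, and the Malloc0 bdev is assumed to have been created earlier in the run, outside this excerpt.

# Target-side setup as traced in the xtrace output above (sketch, not the full script)
rpc_cmd nvmf_create_transport -t tcp -o
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420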
00:29:04.884 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.884 11:39:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 765703 00:29:04.884 [2024-07-15 11:39:48.169588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.884 [2024-07-15 11:39:48.169677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.884 [2024-07-15 11:39:48.169700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.884 [2024-07-15 11:39:48.169711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.884 [2024-07-15 11:39:48.169725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.884 [2024-07-15 11:39:48.169749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.884 qpair failed and we were unable to recover it. 00:29:04.884 [2024-07-15 11:39:48.179549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.884 [2024-07-15 11:39:48.179616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.884 [2024-07-15 11:39:48.179631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.884 [2024-07-15 11:39:48.179639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.884 [2024-07-15 11:39:48.179645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.884 [2024-07-15 11:39:48.179661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.884 qpair failed and we were unable to recover it. 00:29:04.884 [2024-07-15 11:39:48.189583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.189645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.189660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.189667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.189674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.189690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 
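Every subsequent attempt in this excerpt fails the same way: the target's _nvmf_ctrlr_add_io_qpair rejects the I/O queue with "Unknown controller ID 0x1", the host's Fabrics CONNECT poll completes with sct 1, sc 130, and the qpair is torn down with CQ transport error -6. As an interpretation (the log itself does not decode these fields): sct 1 is the command-specific status type, and sc 130 is 0x82, which matches SPDK's SPDK_NVMF_FABRIC_SC_INVALID_PARAM for a rejected CONNECT. The hex conversion itself is easy to check:

printf 'sc %d = 0x%x\n' 130 130   # -> sc 130 = 0x82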
00:29:04.885 [2024-07-15 11:39:48.199612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.199672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.199687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.199695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.199701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.199716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 00:29:04.885 [2024-07-15 11:39:48.209638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.209698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.209714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.209722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.209728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.209743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 00:29:04.885 [2024-07-15 11:39:48.219653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.219713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.219728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.219736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.219742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.219756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 
00:29:04.885 [2024-07-15 11:39:48.229686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.229751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.229766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.229774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.229780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.229795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 00:29:04.885 [2024-07-15 11:39:48.239697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.239759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.239777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.239783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.239790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.239806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 00:29:04.885 [2024-07-15 11:39:48.249723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.249782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.249798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.249806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.249812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.249827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 
00:29:04.885 [2024-07-15 11:39:48.259717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.259777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.259792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.259803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.259809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.259823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 00:29:04.885 [2024-07-15 11:39:48.269803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.269864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.269880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.269888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.269894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.269909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 00:29:04.885 [2024-07-15 11:39:48.279800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.279855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.279869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.279878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.279884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.279898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 
00:29:04.885 [2024-07-15 11:39:48.289853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.289913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.289927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.289935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.289941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.289955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 00:29:04.885 [2024-07-15 11:39:48.299877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.299937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.299954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.299961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.299967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.299982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 00:29:04.885 [2024-07-15 11:39:48.309919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.309977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.309992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.309999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.310006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.310020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 
00:29:04.885 [2024-07-15 11:39:48.319876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.319947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.319962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.885 [2024-07-15 11:39:48.319969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.885 [2024-07-15 11:39:48.319976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.885 [2024-07-15 11:39:48.319990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.885 qpair failed and we were unable to recover it. 00:29:04.885 [2024-07-15 11:39:48.329973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.885 [2024-07-15 11:39:48.330030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.885 [2024-07-15 11:39:48.330045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.330052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.330058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.330073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 00:29:04.886 [2024-07-15 11:39:48.339995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.340053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.340068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.340075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.340081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.340096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 
00:29:04.886 [2024-07-15 11:39:48.350024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.350087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.350105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.350112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.350118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.350133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 00:29:04.886 [2024-07-15 11:39:48.360058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.360117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.360131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.360140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.360146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.360161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 00:29:04.886 [2024-07-15 11:39:48.370095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.370149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.370164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.370172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.370179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.370193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 
00:29:04.886 [2024-07-15 11:39:48.380194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.380267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.380283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.380290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.380297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.380312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 00:29:04.886 [2024-07-15 11:39:48.390185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.390250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.390267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.390276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.390282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.390301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 00:29:04.886 [2024-07-15 11:39:48.400251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.400316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.400331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.400339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.400345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.400360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 
00:29:04.886 [2024-07-15 11:39:48.410255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.410310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.410324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.410331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.410338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.410354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 00:29:04.886 [2024-07-15 11:39:48.420242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.420307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.420321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.420329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.420335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.420350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 00:29:04.886 [2024-07-15 11:39:48.430285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.430343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.430358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.430366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.430373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.430389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 
00:29:04.886 [2024-07-15 11:39:48.440342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.440416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.440437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.440444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.440450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.440465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 00:29:04.886 [2024-07-15 11:39:48.450364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.450452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.450468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.450475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.450481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.450496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 00:29:04.886 [2024-07-15 11:39:48.460377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.460436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.460451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.460458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.886 [2024-07-15 11:39:48.460465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.886 [2024-07-15 11:39:48.460482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.886 qpair failed and we were unable to recover it. 
00:29:04.886 [2024-07-15 11:39:48.470378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.886 [2024-07-15 11:39:48.470439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.886 [2024-07-15 11:39:48.470454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.886 [2024-07-15 11:39:48.470461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.887 [2024-07-15 11:39:48.470468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:04.887 [2024-07-15 11:39:48.470483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.887 qpair failed and we were unable to recover it. 00:29:05.147 [2024-07-15 11:39:48.480361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.147 [2024-07-15 11:39:48.480419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.147 [2024-07-15 11:39:48.480434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.147 [2024-07-15 11:39:48.480441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.147 [2024-07-15 11:39:48.480448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.147 [2024-07-15 11:39:48.480465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.147 qpair failed and we were unable to recover it. 00:29:05.147 [2024-07-15 11:39:48.490402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.147 [2024-07-15 11:39:48.490460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.147 [2024-07-15 11:39:48.490474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.147 [2024-07-15 11:39:48.490481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.147 [2024-07-15 11:39:48.490488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.147 [2024-07-15 11:39:48.490502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.147 qpair failed and we were unable to recover it. 
00:29:05.147 [2024-07-15 11:39:48.500470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.147 [2024-07-15 11:39:48.500533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.147 [2024-07-15 11:39:48.500547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.147 [2024-07-15 11:39:48.500555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.147 [2024-07-15 11:39:48.500561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.147 [2024-07-15 11:39:48.500576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.147 qpair failed and we were unable to recover it. 00:29:05.147 [2024-07-15 11:39:48.510478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.147 [2024-07-15 11:39:48.510541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.147 [2024-07-15 11:39:48.510556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.147 [2024-07-15 11:39:48.510563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.147 [2024-07-15 11:39:48.510570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.147 [2024-07-15 11:39:48.510586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.147 qpair failed and we were unable to recover it. 00:29:05.147 [2024-07-15 11:39:48.520464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.147 [2024-07-15 11:39:48.520517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.147 [2024-07-15 11:39:48.520531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.147 [2024-07-15 11:39:48.520538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.147 [2024-07-15 11:39:48.520545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.147 [2024-07-15 11:39:48.520559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.147 qpair failed and we were unable to recover it. 
00:29:05.147 [2024-07-15 11:39:48.530485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.147 [2024-07-15 11:39:48.530547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.147 [2024-07-15 11:39:48.530562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.147 [2024-07-15 11:39:48.530570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.147 [2024-07-15 11:39:48.530576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.147 [2024-07-15 11:39:48.530590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.147 qpair failed and we were unable to recover it. 00:29:05.147 [2024-07-15 11:39:48.540595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.147 [2024-07-15 11:39:48.540654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.147 [2024-07-15 11:39:48.540668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.147 [2024-07-15 11:39:48.540675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.147 [2024-07-15 11:39:48.540681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.147 [2024-07-15 11:39:48.540696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.147 qpair failed and we were unable to recover it. 00:29:05.147 [2024-07-15 11:39:48.550658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.147 [2024-07-15 11:39:48.550718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.147 [2024-07-15 11:39:48.550732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.147 [2024-07-15 11:39:48.550740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.147 [2024-07-15 11:39:48.550746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.147 [2024-07-15 11:39:48.550762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.147 qpair failed and we were unable to recover it. 
00:29:05.147 [2024-07-15 11:39:48.560650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.147 [2024-07-15 11:39:48.560706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.560721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.560728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.560735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.560750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 00:29:05.148 [2024-07-15 11:39:48.570615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.570673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.570688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.570695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.570705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.570720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 00:29:05.148 [2024-07-15 11:39:48.580681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.580740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.580754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.580761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.580767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.580782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 
00:29:05.148 [2024-07-15 11:39:48.590725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.590787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.590802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.590810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.590815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.590830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 00:29:05.148 [2024-07-15 11:39:48.600717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.600778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.600793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.600801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.600807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.600822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 00:29:05.148 [2024-07-15 11:39:48.610790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.610849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.610864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.610872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.610878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.610893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 
00:29:05.148 [2024-07-15 11:39:48.620779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.620839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.620854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.620862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.620869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.620883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 00:29:05.148 [2024-07-15 11:39:48.630856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.630916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.630931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.630939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.630945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.630960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 00:29:05.148 [2024-07-15 11:39:48.640831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.640917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.640932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.640939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.640945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.640961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 
00:29:05.148 [2024-07-15 11:39:48.650837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.650893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.650908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.650915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.650921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.650936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 00:29:05.148 [2024-07-15 11:39:48.660912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.660971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.660985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.660996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.661003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.661017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 00:29:05.148 [2024-07-15 11:39:48.670892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.670951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.670967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.670975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.670981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.670996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 
00:29:05.148 [2024-07-15 11:39:48.681011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.681070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.681086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.681094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.681100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.681115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 00:29:05.148 [2024-07-15 11:39:48.691013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.691072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.691086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.691094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.148 [2024-07-15 11:39:48.691100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.148 [2024-07-15 11:39:48.691115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.148 qpair failed and we were unable to recover it. 00:29:05.148 [2024-07-15 11:39:48.701049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.148 [2024-07-15 11:39:48.701110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.148 [2024-07-15 11:39:48.701124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.148 [2024-07-15 11:39:48.701132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.149 [2024-07-15 11:39:48.701138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.149 [2024-07-15 11:39:48.701153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.149 qpair failed and we were unable to recover it. 
00:29:05.149 [2024-07-15 11:39:48.711060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.149 [2024-07-15 11:39:48.711120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.149 [2024-07-15 11:39:48.711135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.149 [2024-07-15 11:39:48.711142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.149 [2024-07-15 11:39:48.711149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.149 [2024-07-15 11:39:48.711164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.149 qpair failed and we were unable to recover it. 00:29:05.149 [2024-07-15 11:39:48.721123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.149 [2024-07-15 11:39:48.721183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.149 [2024-07-15 11:39:48.721198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.149 [2024-07-15 11:39:48.721205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.149 [2024-07-15 11:39:48.721211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.149 [2024-07-15 11:39:48.721230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.149 qpair failed and we were unable to recover it. 00:29:05.149 [2024-07-15 11:39:48.731116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.149 [2024-07-15 11:39:48.731178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.149 [2024-07-15 11:39:48.731193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.149 [2024-07-15 11:39:48.731200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.149 [2024-07-15 11:39:48.731207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.149 [2024-07-15 11:39:48.731221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.149 qpair failed and we were unable to recover it. 
00:29:05.410 [2024-07-15 11:39:48.741164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.410 [2024-07-15 11:39:48.741234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.410 [2024-07-15 11:39:48.741250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.410 [2024-07-15 11:39:48.741257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.410 [2024-07-15 11:39:48.741263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.410 [2024-07-15 11:39:48.741279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.410 qpair failed and we were unable to recover it. 00:29:05.410 [2024-07-15 11:39:48.751176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.410 [2024-07-15 11:39:48.751242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.410 [2024-07-15 11:39:48.751258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.410 [2024-07-15 11:39:48.751269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.410 [2024-07-15 11:39:48.751275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.410 [2024-07-15 11:39:48.751290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.410 qpair failed and we were unable to recover it. 00:29:05.410 [2024-07-15 11:39:48.761215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.410 [2024-07-15 11:39:48.761275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.410 [2024-07-15 11:39:48.761290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.410 [2024-07-15 11:39:48.761297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.410 [2024-07-15 11:39:48.761304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.410 [2024-07-15 11:39:48.761318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.410 qpair failed and we were unable to recover it. 
00:29:05.410 [2024-07-15 11:39:48.771251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.410 [2024-07-15 11:39:48.771313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.410 [2024-07-15 11:39:48.771327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.410 [2024-07-15 11:39:48.771335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.410 [2024-07-15 11:39:48.771341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.410 [2024-07-15 11:39:48.771355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.410 qpair failed and we were unable to recover it. 00:29:05.410 [2024-07-15 11:39:48.781246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.410 [2024-07-15 11:39:48.781313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.410 [2024-07-15 11:39:48.781327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.410 [2024-07-15 11:39:48.781334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.410 [2024-07-15 11:39:48.781340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.410 [2024-07-15 11:39:48.781356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.410 qpair failed and we were unable to recover it. 00:29:05.410 [2024-07-15 11:39:48.791294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.410 [2024-07-15 11:39:48.791352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.410 [2024-07-15 11:39:48.791366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.410 [2024-07-15 11:39:48.791374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.410 [2024-07-15 11:39:48.791380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.410 [2024-07-15 11:39:48.791395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.410 qpair failed and we were unable to recover it. 
00:29:05.410 [2024-07-15 11:39:48.801350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.410 [2024-07-15 11:39:48.801410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.410 [2024-07-15 11:39:48.801425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.410 [2024-07-15 11:39:48.801432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.410 [2024-07-15 11:39:48.801438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.410 [2024-07-15 11:39:48.801452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.410 qpair failed and we were unable to recover it. 00:29:05.410 [2024-07-15 11:39:48.811380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.410 [2024-07-15 11:39:48.811438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.410 [2024-07-15 11:39:48.811453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.410 [2024-07-15 11:39:48.811461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.410 [2024-07-15 11:39:48.811467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.410 [2024-07-15 11:39:48.811484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.410 qpair failed and we were unable to recover it. 00:29:05.410 [2024-07-15 11:39:48.821361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.410 [2024-07-15 11:39:48.821429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.410 [2024-07-15 11:39:48.821444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.410 [2024-07-15 11:39:48.821451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.410 [2024-07-15 11:39:48.821457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.410 [2024-07-15 11:39:48.821471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.410 qpair failed and we were unable to recover it. 
00:29:05.410 [2024-07-15 11:39:48.831428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.410 [2024-07-15 11:39:48.831487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.410 [2024-07-15 11:39:48.831501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.410 [2024-07-15 11:39:48.831508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.410 [2024-07-15 11:39:48.831515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.410 [2024-07-15 11:39:48.831529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.410 qpair failed and we were unable to recover it. 00:29:05.410 [2024-07-15 11:39:48.841461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.410 [2024-07-15 11:39:48.841517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.410 [2024-07-15 11:39:48.841536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.410 [2024-07-15 11:39:48.841543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.410 [2024-07-15 11:39:48.841549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.410 [2024-07-15 11:39:48.841563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.410 qpair failed and we were unable to recover it. 00:29:05.410 [2024-07-15 11:39:48.851476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.411 [2024-07-15 11:39:48.851558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.411 [2024-07-15 11:39:48.851573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.411 [2024-07-15 11:39:48.851580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.411 [2024-07-15 11:39:48.851586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.411 [2024-07-15 11:39:48.851600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.411 qpair failed and we were unable to recover it. 
00:29:05.411 [2024-07-15 11:39:48.861503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.411 [2024-07-15 11:39:48.861560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.411 [2024-07-15 11:39:48.861575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.411 [2024-07-15 11:39:48.861582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.411 [2024-07-15 11:39:48.861588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.411 [2024-07-15 11:39:48.861603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.411 qpair failed and we were unable to recover it. 00:29:05.411 [2024-07-15 11:39:48.871531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.411 [2024-07-15 11:39:48.871592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.411 [2024-07-15 11:39:48.871607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.411 [2024-07-15 11:39:48.871614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.411 [2024-07-15 11:39:48.871620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.411 [2024-07-15 11:39:48.871635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.411 qpair failed and we were unable to recover it. 00:29:05.411 [2024-07-15 11:39:48.881502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.411 [2024-07-15 11:39:48.881564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.411 [2024-07-15 11:39:48.881580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.411 [2024-07-15 11:39:48.881587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.411 [2024-07-15 11:39:48.881595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.411 [2024-07-15 11:39:48.881612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.411 qpair failed and we were unable to recover it. 
00:29:05.411 [2024-07-15 11:39:48.891587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.411 [2024-07-15 11:39:48.891652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.411 [2024-07-15 11:39:48.891667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.411 [2024-07-15 11:39:48.891674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.411 [2024-07-15 11:39:48.891681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.411 [2024-07-15 11:39:48.891695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.411 qpair failed and we were unable to recover it. 00:29:05.411 [2024-07-15 11:39:48.901663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.411 [2024-07-15 11:39:48.901748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.411 [2024-07-15 11:39:48.901763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.411 [2024-07-15 11:39:48.901770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.411 [2024-07-15 11:39:48.901777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.411 [2024-07-15 11:39:48.901792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.411 qpair failed and we were unable to recover it. 00:29:05.411 [2024-07-15 11:39:48.911641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.411 [2024-07-15 11:39:48.911704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.411 [2024-07-15 11:39:48.911719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.411 [2024-07-15 11:39:48.911727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.411 [2024-07-15 11:39:48.911734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.411 [2024-07-15 11:39:48.911748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.411 qpair failed and we were unable to recover it. 
00:29:05.411 [2024-07-15 11:39:48.921678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.411 [2024-07-15 11:39:48.921733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.411 [2024-07-15 11:39:48.921748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.411 [2024-07-15 11:39:48.921755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.411 [2024-07-15 11:39:48.921761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.411 [2024-07-15 11:39:48.921776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.411 qpair failed and we were unable to recover it. 00:29:05.411 [2024-07-15 11:39:48.931700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.411 [2024-07-15 11:39:48.931759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.411 [2024-07-15 11:39:48.931777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.411 [2024-07-15 11:39:48.931784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.411 [2024-07-15 11:39:48.931790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.411 [2024-07-15 11:39:48.931804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.411 qpair failed and we were unable to recover it. 00:29:05.411 [2024-07-15 11:39:48.941728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.411 [2024-07-15 11:39:48.941786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.411 [2024-07-15 11:39:48.941801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.411 [2024-07-15 11:39:48.941808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.411 [2024-07-15 11:39:48.941814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.411 [2024-07-15 11:39:48.941829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.411 qpair failed and we were unable to recover it. 
00:29:05.411 [2024-07-15 11:39:48.951754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.411 [2024-07-15 11:39:48.951815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.412 [2024-07-15 11:39:48.951831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.412 [2024-07-15 11:39:48.951838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.412 [2024-07-15 11:39:48.951845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.412 [2024-07-15 11:39:48.951859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.412 qpair failed and we were unable to recover it. 00:29:05.412 [2024-07-15 11:39:48.961780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.412 [2024-07-15 11:39:48.961837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.412 [2024-07-15 11:39:48.961854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.412 [2024-07-15 11:39:48.961861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.412 [2024-07-15 11:39:48.961867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.412 [2024-07-15 11:39:48.961882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.412 qpair failed and we were unable to recover it. 00:29:05.412 [2024-07-15 11:39:48.971865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.412 [2024-07-15 11:39:48.971924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.412 [2024-07-15 11:39:48.971938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.412 [2024-07-15 11:39:48.971946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.412 [2024-07-15 11:39:48.971956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.412 [2024-07-15 11:39:48.971971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.412 qpair failed and we were unable to recover it. 
00:29:05.412 [2024-07-15 11:39:48.981875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.412 [2024-07-15 11:39:48.981934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.412 [2024-07-15 11:39:48.981949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.412 [2024-07-15 11:39:48.981957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.412 [2024-07-15 11:39:48.981963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.412 [2024-07-15 11:39:48.981977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.412 qpair failed and we were unable to recover it. 00:29:05.412 [2024-07-15 11:39:48.991860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.412 [2024-07-15 11:39:48.991923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.412 [2024-07-15 11:39:48.991938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.412 [2024-07-15 11:39:48.991946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.412 [2024-07-15 11:39:48.991952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.412 [2024-07-15 11:39:48.991966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.412 qpair failed and we were unable to recover it. 00:29:05.672 [2024-07-15 11:39:49.001893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.672 [2024-07-15 11:39:49.001955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.672 [2024-07-15 11:39:49.001970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.672 [2024-07-15 11:39:49.001978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.672 [2024-07-15 11:39:49.001985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.672 [2024-07-15 11:39:49.002001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.672 qpair failed and we were unable to recover it. 
00:29:05.672 [2024-07-15 11:39:49.011927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.672 [2024-07-15 11:39:49.011985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.672 [2024-07-15 11:39:49.012000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.672 [2024-07-15 11:39:49.012008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.672 [2024-07-15 11:39:49.012014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.672 [2024-07-15 11:39:49.012028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.672 qpair failed and we were unable to recover it. 00:29:05.672 [2024-07-15 11:39:49.021978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.672 [2024-07-15 11:39:49.022080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.672 [2024-07-15 11:39:49.022095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.672 [2024-07-15 11:39:49.022102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.672 [2024-07-15 11:39:49.022109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.672 [2024-07-15 11:39:49.022124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.672 qpair failed and we were unable to recover it. 00:29:05.672 [2024-07-15 11:39:49.031969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.672 [2024-07-15 11:39:49.032031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.672 [2024-07-15 11:39:49.032046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.672 [2024-07-15 11:39:49.032053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.672 [2024-07-15 11:39:49.032060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.672 [2024-07-15 11:39:49.032074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.672 qpair failed and we were unable to recover it. 
00:29:05.672 [2024-07-15 11:39:49.042007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.672 [2024-07-15 11:39:49.042064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.672 [2024-07-15 11:39:49.042079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.672 [2024-07-15 11:39:49.042087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.672 [2024-07-15 11:39:49.042093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.672 [2024-07-15 11:39:49.042107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.672 qpair failed and we were unable to recover it. 00:29:05.672 [2024-07-15 11:39:49.052022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.672 [2024-07-15 11:39:49.052085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.672 [2024-07-15 11:39:49.052100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.672 [2024-07-15 11:39:49.052109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.672 [2024-07-15 11:39:49.052115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.672 [2024-07-15 11:39:49.052130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.672 qpair failed and we were unable to recover it. 00:29:05.672 [2024-07-15 11:39:49.062070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.672 [2024-07-15 11:39:49.062127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.672 [2024-07-15 11:39:49.062141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.672 [2024-07-15 11:39:49.062152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.062158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.062172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 
00:29:05.673 [2024-07-15 11:39:49.072088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.072147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.072162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.072169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.072175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.072190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 00:29:05.673 [2024-07-15 11:39:49.082157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.082215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.082233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.082240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.082246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.082261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 00:29:05.673 [2024-07-15 11:39:49.092195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.092256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.092271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.092278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.092285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.092300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 
00:29:05.673 [2024-07-15 11:39:49.102189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.102253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.102269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.102276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.102283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.102297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 00:29:05.673 [2024-07-15 11:39:49.112210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.112275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.112289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.112297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.112303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.112318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 00:29:05.673 [2024-07-15 11:39:49.122232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.122287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.122302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.122310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.122317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.122331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 
00:29:05.673 [2024-07-15 11:39:49.132265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.132319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.132334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.132342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.132348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.132362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 00:29:05.673 [2024-07-15 11:39:49.142312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.142370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.142385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.142392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.142398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.142413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 00:29:05.673 [2024-07-15 11:39:49.152320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.152388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.152402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.152413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.152419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.152433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 
00:29:05.673 [2024-07-15 11:39:49.162371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.162437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.162452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.162460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.162466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.162480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 00:29:05.673 [2024-07-15 11:39:49.172375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.172435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.172450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.172457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.172463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.172478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 00:29:05.673 [2024-07-15 11:39:49.182430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.182490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.182505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.182512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.182518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.182532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 
00:29:05.673 [2024-07-15 11:39:49.192385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.192448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.192465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.192471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.192478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.192492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 00:29:05.673 [2024-07-15 11:39:49.202470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.673 [2024-07-15 11:39:49.202526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.673 [2024-07-15 11:39:49.202541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.673 [2024-07-15 11:39:49.202548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.673 [2024-07-15 11:39:49.202555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.673 [2024-07-15 11:39:49.202569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.673 qpair failed and we were unable to recover it. 00:29:05.674 [2024-07-15 11:39:49.212493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.674 [2024-07-15 11:39:49.212550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.674 [2024-07-15 11:39:49.212566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.674 [2024-07-15 11:39:49.212573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.674 [2024-07-15 11:39:49.212579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.674 [2024-07-15 11:39:49.212594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.674 qpair failed and we were unable to recover it. 
00:29:05.674 [2024-07-15 11:39:49.222521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.674 [2024-07-15 11:39:49.222577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.674 [2024-07-15 11:39:49.222591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.674 [2024-07-15 11:39:49.222598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.674 [2024-07-15 11:39:49.222604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.674 [2024-07-15 11:39:49.222618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.674 qpair failed and we were unable to recover it. 00:29:05.674 [2024-07-15 11:39:49.232579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.674 [2024-07-15 11:39:49.232676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.674 [2024-07-15 11:39:49.232692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.674 [2024-07-15 11:39:49.232700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.674 [2024-07-15 11:39:49.232706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.674 [2024-07-15 11:39:49.232721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.674 qpair failed and we were unable to recover it. 00:29:05.674 [2024-07-15 11:39:49.242619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.674 [2024-07-15 11:39:49.242679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.674 [2024-07-15 11:39:49.242697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.674 [2024-07-15 11:39:49.242705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.674 [2024-07-15 11:39:49.242711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.674 [2024-07-15 11:39:49.242726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.674 qpair failed and we were unable to recover it. 
00:29:05.674 [2024-07-15 11:39:49.252538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.674 [2024-07-15 11:39:49.252596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.674 [2024-07-15 11:39:49.252610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.674 [2024-07-15 11:39:49.252618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.674 [2024-07-15 11:39:49.252624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.674 [2024-07-15 11:39:49.252638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.674 qpair failed and we were unable to recover it. 00:29:05.934 [2024-07-15 11:39:49.262719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.934 [2024-07-15 11:39:49.262802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.934 [2024-07-15 11:39:49.262819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.934 [2024-07-15 11:39:49.262826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.934 [2024-07-15 11:39:49.262833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.934 [2024-07-15 11:39:49.262849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.934 qpair failed and we were unable to recover it. 00:29:05.934 [2024-07-15 11:39:49.272720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.934 [2024-07-15 11:39:49.272776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.934 [2024-07-15 11:39:49.272791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.934 [2024-07-15 11:39:49.272799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.934 [2024-07-15 11:39:49.272805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.934 [2024-07-15 11:39:49.272820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.934 qpair failed and we were unable to recover it. 
00:29:05.934 [2024-07-15 11:39:49.282699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.934 [2024-07-15 11:39:49.282752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.934 [2024-07-15 11:39:49.282766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.934 [2024-07-15 11:39:49.282773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.934 [2024-07-15 11:39:49.282780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.934 [2024-07-15 11:39:49.282798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.934 qpair failed and we were unable to recover it. 00:29:05.934 [2024-07-15 11:39:49.292694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.934 [2024-07-15 11:39:49.292798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.934 [2024-07-15 11:39:49.292813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.934 [2024-07-15 11:39:49.292820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.934 [2024-07-15 11:39:49.292827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.934 [2024-07-15 11:39:49.292842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.934 qpair failed and we were unable to recover it. 00:29:05.934 [2024-07-15 11:39:49.302759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.934 [2024-07-15 11:39:49.302817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.934 [2024-07-15 11:39:49.302832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.934 [2024-07-15 11:39:49.302840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.934 [2024-07-15 11:39:49.302846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.934 [2024-07-15 11:39:49.302860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.934 qpair failed and we were unable to recover it. 
00:29:05.934 [2024-07-15 11:39:49.312770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.934 [2024-07-15 11:39:49.312822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.934 [2024-07-15 11:39:49.312837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.934 [2024-07-15 11:39:49.312844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.934 [2024-07-15 11:39:49.312850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.934 [2024-07-15 11:39:49.312865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.934 qpair failed and we were unable to recover it. 00:29:05.934 [2024-07-15 11:39:49.322811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.934 [2024-07-15 11:39:49.322869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.934 [2024-07-15 11:39:49.322883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.934 [2024-07-15 11:39:49.322890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.934 [2024-07-15 11:39:49.322897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.934 [2024-07-15 11:39:49.322911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.934 qpair failed and we were unable to recover it. 00:29:05.934 [2024-07-15 11:39:49.332847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.934 [2024-07-15 11:39:49.332904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.934 [2024-07-15 11:39:49.332923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.934 [2024-07-15 11:39:49.332930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.934 [2024-07-15 11:39:49.332936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.934 [2024-07-15 11:39:49.332951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.934 qpair failed and we were unable to recover it. 
00:29:05.934 [2024-07-15 11:39:49.342883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.934 [2024-07-15 11:39:49.342940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.934 [2024-07-15 11:39:49.342955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.934 [2024-07-15 11:39:49.342961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.342968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.342983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 00:29:05.935 [2024-07-15 11:39:49.352896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.352952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.352966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.352973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.352980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.352995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 00:29:05.935 [2024-07-15 11:39:49.362905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.362963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.362978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.362985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.362991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.363006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 
00:29:05.935 [2024-07-15 11:39:49.372956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.373015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.373030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.373037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.373048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.373064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 00:29:05.935 [2024-07-15 11:39:49.382991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.383052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.383067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.383074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.383080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.383095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 00:29:05.935 [2024-07-15 11:39:49.393018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.393075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.393090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.393097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.393104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.393119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 
00:29:05.935 [2024-07-15 11:39:49.403039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.403098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.403112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.403119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.403126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.403140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 00:29:05.935 [2024-07-15 11:39:49.413071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.413129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.413144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.413151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.413157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.413171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 00:29:05.935 [2024-07-15 11:39:49.423111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.423174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.423189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.423196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.423202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.423216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 
00:29:05.935 [2024-07-15 11:39:49.433155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.433211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.433228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.433236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.433242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.433257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 00:29:05.935 [2024-07-15 11:39:49.443158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.443216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.443234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.443242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.443248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.443262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 00:29:05.935 [2024-07-15 11:39:49.453188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.453246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.453260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.453268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.453274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.453288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 
00:29:05.935 [2024-07-15 11:39:49.463237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.463314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.463330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.463337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.463346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.463361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 00:29:05.935 [2024-07-15 11:39:49.473243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.473306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.473320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.473328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.473335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.473349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 00:29:05.935 [2024-07-15 11:39:49.483282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.935 [2024-07-15 11:39:49.483342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.935 [2024-07-15 11:39:49.483357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.935 [2024-07-15 11:39:49.483364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.935 [2024-07-15 11:39:49.483371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.935 [2024-07-15 11:39:49.483385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.935 qpair failed and we were unable to recover it. 
00:29:05.936 [2024-07-15 11:39:49.493319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.936 [2024-07-15 11:39:49.493377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.936 [2024-07-15 11:39:49.493392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.936 [2024-07-15 11:39:49.493399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.936 [2024-07-15 11:39:49.493405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.936 [2024-07-15 11:39:49.493420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.936 qpair failed and we were unable to recover it. 00:29:05.936 [2024-07-15 11:39:49.503363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.936 [2024-07-15 11:39:49.503420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.936 [2024-07-15 11:39:49.503435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.936 [2024-07-15 11:39:49.503442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.936 [2024-07-15 11:39:49.503448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.936 [2024-07-15 11:39:49.503462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.936 qpair failed and we were unable to recover it. 00:29:05.936 [2024-07-15 11:39:49.513369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.936 [2024-07-15 11:39:49.513427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.936 [2024-07-15 11:39:49.513442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.936 [2024-07-15 11:39:49.513449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.936 [2024-07-15 11:39:49.513455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.936 [2024-07-15 11:39:49.513470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.936 qpair failed and we were unable to recover it. 
00:29:05.936 [2024-07-15 11:39:49.523419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.936 [2024-07-15 11:39:49.523479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.936 [2024-07-15 11:39:49.523494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.936 [2024-07-15 11:39:49.523502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.936 [2024-07-15 11:39:49.523508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:05.936 [2024-07-15 11:39:49.523522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:05.936 qpair failed and we were unable to recover it. 00:29:06.196 [2024-07-15 11:39:49.533437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.196 [2024-07-15 11:39:49.533491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.196 [2024-07-15 11:39:49.533506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.196 [2024-07-15 11:39:49.533513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.533520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.533535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 00:29:06.197 [2024-07-15 11:39:49.543490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.543558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.543573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.543580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.543586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.543601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 
00:29:06.197 [2024-07-15 11:39:49.553512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.553596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.553611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.553621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.553627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.553642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 00:29:06.197 [2024-07-15 11:39:49.563525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.563595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.563609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.563616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.563622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.563638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 00:29:06.197 [2024-07-15 11:39:49.573551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.573609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.573624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.573631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.573637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.573652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 
00:29:06.197 [2024-07-15 11:39:49.583588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.583646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.583661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.583668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.583675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.583690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 00:29:06.197 [2024-07-15 11:39:49.593601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.593660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.593675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.593682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.593688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.593703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 00:29:06.197 [2024-07-15 11:39:49.603671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.603729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.603743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.603751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.603757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.603772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 
00:29:06.197 [2024-07-15 11:39:49.613662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.613732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.613746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.613753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.613759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.613774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 00:29:06.197 [2024-07-15 11:39:49.623694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.623755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.623769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.623777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.623783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.623797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 00:29:06.197 [2024-07-15 11:39:49.633744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.633805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.633820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.633827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.633834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.633848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 
00:29:06.197 [2024-07-15 11:39:49.643744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.643807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.643825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.643833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.643839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.643853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 00:29:06.197 [2024-07-15 11:39:49.653767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.653821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.653836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.653843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.653849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.653864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 00:29:06.197 [2024-07-15 11:39:49.663804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.663866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.663880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.663888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.663894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.197 [2024-07-15 11:39:49.663910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.197 qpair failed and we were unable to recover it. 
00:29:06.197 [2024-07-15 11:39:49.673798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.197 [2024-07-15 11:39:49.673891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.197 [2024-07-15 11:39:49.673906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.197 [2024-07-15 11:39:49.673914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.197 [2024-07-15 11:39:49.673920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.198 [2024-07-15 11:39:49.673935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.198 qpair failed and we were unable to recover it. 00:29:06.198 [2024-07-15 11:39:49.683869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.198 [2024-07-15 11:39:49.683932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.198 [2024-07-15 11:39:49.683946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.198 [2024-07-15 11:39:49.683954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.198 [2024-07-15 11:39:49.683960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.198 [2024-07-15 11:39:49.683977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.198 qpair failed and we were unable to recover it. 00:29:06.198 [2024-07-15 11:39:49.693882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.198 [2024-07-15 11:39:49.693939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.198 [2024-07-15 11:39:49.693953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.198 [2024-07-15 11:39:49.693960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.198 [2024-07-15 11:39:49.693966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.198 [2024-07-15 11:39:49.693981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.198 qpair failed and we were unable to recover it. 
00:29:06.198 [2024-07-15 11:39:49.703922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.198 [2024-07-15 11:39:49.703981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.198 [2024-07-15 11:39:49.703996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.198 [2024-07-15 11:39:49.704003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.198 [2024-07-15 11:39:49.704009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.198 [2024-07-15 11:39:49.704024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.198 qpair failed and we were unable to recover it. 00:29:06.198 [2024-07-15 11:39:49.713879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.198 [2024-07-15 11:39:49.713941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.198 [2024-07-15 11:39:49.713956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.198 [2024-07-15 11:39:49.713963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.198 [2024-07-15 11:39:49.713969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.198 [2024-07-15 11:39:49.713984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.198 qpair failed and we were unable to recover it. 00:29:06.198 [2024-07-15 11:39:49.723971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.198 [2024-07-15 11:39:49.724033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.198 [2024-07-15 11:39:49.724047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.198 [2024-07-15 11:39:49.724054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.198 [2024-07-15 11:39:49.724060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.198 [2024-07-15 11:39:49.724075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.198 qpair failed and we were unable to recover it. 
00:29:06.198 [2024-07-15 11:39:49.734015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.198 [2024-07-15 11:39:49.734074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.198 [2024-07-15 11:39:49.734092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.198 [2024-07-15 11:39:49.734099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.198 [2024-07-15 11:39:49.734105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.198 [2024-07-15 11:39:49.734120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.198 qpair failed and we were unable to recover it. 00:29:06.198 [2024-07-15 11:39:49.744034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.198 [2024-07-15 11:39:49.744095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.198 [2024-07-15 11:39:49.744110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.198 [2024-07-15 11:39:49.744118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.198 [2024-07-15 11:39:49.744124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.198 [2024-07-15 11:39:49.744140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.198 qpair failed and we were unable to recover it. 00:29:06.198 [2024-07-15 11:39:49.754064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.198 [2024-07-15 11:39:49.754123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.198 [2024-07-15 11:39:49.754137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.198 [2024-07-15 11:39:49.754144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.198 [2024-07-15 11:39:49.754150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.198 [2024-07-15 11:39:49.754165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.198 qpair failed and we were unable to recover it. 
00:29:06.198 [2024-07-15 11:39:49.764093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.198 [2024-07-15 11:39:49.764147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.198 [2024-07-15 11:39:49.764162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.198 [2024-07-15 11:39:49.764170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.198 [2024-07-15 11:39:49.764176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.198 [2024-07-15 11:39:49.764191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.198 qpair failed and we were unable to recover it. 00:29:06.198 [2024-07-15 11:39:49.774125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.198 [2024-07-15 11:39:49.774181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.198 [2024-07-15 11:39:49.774196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.198 [2024-07-15 11:39:49.774203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.198 [2024-07-15 11:39:49.774212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.198 [2024-07-15 11:39:49.774230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.198 qpair failed and we were unable to recover it. 00:29:06.198 [2024-07-15 11:39:49.784159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.198 [2024-07-15 11:39:49.784220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.198 [2024-07-15 11:39:49.784242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.198 [2024-07-15 11:39:49.784249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.198 [2024-07-15 11:39:49.784255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.198 [2024-07-15 11:39:49.784270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.198 qpair failed and we were unable to recover it. 
00:29:06.461 [2024-07-15 11:39:49.794189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.794248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.461 [2024-07-15 11:39:49.794263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.461 [2024-07-15 11:39:49.794271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.461 [2024-07-15 11:39:49.794278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.461 [2024-07-15 11:39:49.794293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.461 qpair failed and we were unable to recover it. 00:29:06.461 [2024-07-15 11:39:49.804194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.804262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.461 [2024-07-15 11:39:49.804279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.461 [2024-07-15 11:39:49.804286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.461 [2024-07-15 11:39:49.804293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.461 [2024-07-15 11:39:49.804309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.461 qpair failed and we were unable to recover it. 00:29:06.461 [2024-07-15 11:39:49.814245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.814301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.461 [2024-07-15 11:39:49.814318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.461 [2024-07-15 11:39:49.814325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.461 [2024-07-15 11:39:49.814332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.461 [2024-07-15 11:39:49.814348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.461 qpair failed and we were unable to recover it. 
00:29:06.461 [2024-07-15 11:39:49.824205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.824271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.461 [2024-07-15 11:39:49.824288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.461 [2024-07-15 11:39:49.824295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.461 [2024-07-15 11:39:49.824302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.461 [2024-07-15 11:39:49.824317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.461 qpair failed and we were unable to recover it. 00:29:06.461 [2024-07-15 11:39:49.834296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.834360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.461 [2024-07-15 11:39:49.834377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.461 [2024-07-15 11:39:49.834387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.461 [2024-07-15 11:39:49.834397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.461 [2024-07-15 11:39:49.834414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.461 qpair failed and we were unable to recover it. 00:29:06.461 [2024-07-15 11:39:49.844321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.844383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.461 [2024-07-15 11:39:49.844400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.461 [2024-07-15 11:39:49.844409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.461 [2024-07-15 11:39:49.844417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.461 [2024-07-15 11:39:49.844433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.461 qpair failed and we were unable to recover it. 
00:29:06.461 [2024-07-15 11:39:49.854356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.854420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.461 [2024-07-15 11:39:49.854436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.461 [2024-07-15 11:39:49.854444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.461 [2024-07-15 11:39:49.854452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.461 [2024-07-15 11:39:49.854470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.461 qpair failed and we were unable to recover it. 00:29:06.461 [2024-07-15 11:39:49.864396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.864457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.461 [2024-07-15 11:39:49.864472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.461 [2024-07-15 11:39:49.864479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.461 [2024-07-15 11:39:49.864493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.461 [2024-07-15 11:39:49.864510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.461 qpair failed and we were unable to recover it. 00:29:06.461 [2024-07-15 11:39:49.874415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.874473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.461 [2024-07-15 11:39:49.874488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.461 [2024-07-15 11:39:49.874496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.461 [2024-07-15 11:39:49.874502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.461 [2024-07-15 11:39:49.874517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.461 qpair failed and we were unable to recover it. 
00:29:06.461 [2024-07-15 11:39:49.884439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.884492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.461 [2024-07-15 11:39:49.884507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.461 [2024-07-15 11:39:49.884514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.461 [2024-07-15 11:39:49.884521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.461 [2024-07-15 11:39:49.884535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.461 qpair failed and we were unable to recover it. 00:29:06.461 [2024-07-15 11:39:49.894418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.894478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.461 [2024-07-15 11:39:49.894494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.461 [2024-07-15 11:39:49.894501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.461 [2024-07-15 11:39:49.894508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.461 [2024-07-15 11:39:49.894523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.461 qpair failed and we were unable to recover it. 00:29:06.461 [2024-07-15 11:39:49.904442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.904501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.461 [2024-07-15 11:39:49.904516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.461 [2024-07-15 11:39:49.904522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.461 [2024-07-15 11:39:49.904529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.461 [2024-07-15 11:39:49.904544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.461 qpair failed and we were unable to recover it. 
00:29:06.461 [2024-07-15 11:39:49.914471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.461 [2024-07-15 11:39:49.914536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:49.914552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:49.914558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:49.914564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:49.914578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 00:29:06.462 [2024-07-15 11:39:49.924507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:49.924560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:49.924576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:49.924583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:49.924590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:49.924605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 00:29:06.462 [2024-07-15 11:39:49.934610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:49.934668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:49.934683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:49.934690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:49.934696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:49.934711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 
00:29:06.462 [2024-07-15 11:39:49.944566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:49.944625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:49.944639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:49.944646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:49.944653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:49.944667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 00:29:06.462 [2024-07-15 11:39:49.954631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:49.954688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:49.954703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:49.954714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:49.954721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:49.954735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 00:29:06.462 [2024-07-15 11:39:49.964602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:49.964662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:49.964676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:49.964684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:49.964690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:49.964705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 
00:29:06.462 [2024-07-15 11:39:49.974686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:49.974740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:49.974756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:49.974764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:49.974771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:49.974786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 00:29:06.462 [2024-07-15 11:39:49.984692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:49.984767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:49.984783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:49.984790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:49.984797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:49.984812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 00:29:06.462 [2024-07-15 11:39:49.994732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:49.994792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:49.994808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:49.994815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:49.994821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:49.994835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 
00:29:06.462 [2024-07-15 11:39:50.004752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:50.004807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:50.004822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:50.004830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:50.004837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:50.004852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 00:29:06.462 [2024-07-15 11:39:50.014803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:50.014866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:50.014884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:50.014892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:50.014898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:50.014915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 00:29:06.462 [2024-07-15 11:39:50.024810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:50.024872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:50.024890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:50.024898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:50.024905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:50.024921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 
00:29:06.462 [2024-07-15 11:39:50.034911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:50.034976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:50.034992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:50.035000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:50.035006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:50.035021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 00:29:06.462 [2024-07-15 11:39:50.044936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.462 [2024-07-15 11:39:50.044998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.462 [2024-07-15 11:39:50.045018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.462 [2024-07-15 11:39:50.045025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.462 [2024-07-15 11:39:50.045032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.462 [2024-07-15 11:39:50.045048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.462 qpair failed and we were unable to recover it. 00:29:06.723 [2024-07-15 11:39:50.055012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.723 [2024-07-15 11:39:50.055076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.723 [2024-07-15 11:39:50.055094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.723 [2024-07-15 11:39:50.055102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.723 [2024-07-15 11:39:50.055109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.723 [2024-07-15 11:39:50.055135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.723 qpair failed and we were unable to recover it. 
00:29:06.723 [2024-07-15 11:39:50.064940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.723 [2024-07-15 11:39:50.065012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.723 [2024-07-15 11:39:50.065030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.723 [2024-07-15 11:39:50.065038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.723 [2024-07-15 11:39:50.065044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.723 [2024-07-15 11:39:50.065061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.723 qpair failed and we were unable to recover it. 00:29:06.723 [2024-07-15 11:39:50.074963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.723 [2024-07-15 11:39:50.075029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.723 [2024-07-15 11:39:50.075045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.723 [2024-07-15 11:39:50.075052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.723 [2024-07-15 11:39:50.075059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.723 [2024-07-15 11:39:50.075074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.723 qpair failed and we were unable to recover it. 00:29:06.723 [2024-07-15 11:39:50.084989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.723 [2024-07-15 11:39:50.085053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.723 [2024-07-15 11:39:50.085068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.723 [2024-07-15 11:39:50.085075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.723 [2024-07-15 11:39:50.085081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.723 [2024-07-15 11:39:50.085099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.723 qpair failed and we were unable to recover it. 
00:29:06.723 [2024-07-15 11:39:50.095087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.723 [2024-07-15 11:39:50.095150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.723 [2024-07-15 11:39:50.095165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.723 [2024-07-15 11:39:50.095172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.723 [2024-07-15 11:39:50.095179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.723 [2024-07-15 11:39:50.095194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.723 qpair failed and we were unable to recover it. 00:29:06.723 [2024-07-15 11:39:50.105031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.723 [2024-07-15 11:39:50.105091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.723 [2024-07-15 11:39:50.105106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.723 [2024-07-15 11:39:50.105113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.105119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.105134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 00:29:06.724 [2024-07-15 11:39:50.115104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.115192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.115208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.115215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.115221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.115240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 
00:29:06.724 [2024-07-15 11:39:50.125133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.125192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.125209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.125217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.125239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.125255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 00:29:06.724 [2024-07-15 11:39:50.135184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.135254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.135279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.135286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.135293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.135309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 00:29:06.724 [2024-07-15 11:39:50.145217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.145280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.145296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.145303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.145310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.145325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 
00:29:06.724 [2024-07-15 11:39:50.155274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.155343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.155358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.155365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.155371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.155385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 00:29:06.724 [2024-07-15 11:39:50.165292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.165352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.165367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.165373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.165380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.165395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 00:29:06.724 [2024-07-15 11:39:50.175294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.175357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.175372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.175379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.175385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.175402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 
00:29:06.724 [2024-07-15 11:39:50.185329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.185400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.185416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.185422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.185429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.185443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 00:29:06.724 [2024-07-15 11:39:50.195358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.195420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.195436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.195443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.195450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.195465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 00:29:06.724 [2024-07-15 11:39:50.205332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.205386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.205401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.205407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.205414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.205428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 
00:29:06.724 [2024-07-15 11:39:50.215414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.215471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.215485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.215493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.215500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.215514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 00:29:06.724 [2024-07-15 11:39:50.225384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.225448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.225465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.225472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.225480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.225495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 00:29:06.724 [2024-07-15 11:39:50.235507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.235572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.724 [2024-07-15 11:39:50.235587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.724 [2024-07-15 11:39:50.235595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.724 [2024-07-15 11:39:50.235601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.724 [2024-07-15 11:39:50.235616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.724 qpair failed and we were unable to recover it. 
00:29:06.724 [2024-07-15 11:39:50.245513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.724 [2024-07-15 11:39:50.245571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.725 [2024-07-15 11:39:50.245587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.725 [2024-07-15 11:39:50.245594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.725 [2024-07-15 11:39:50.245602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.725 [2024-07-15 11:39:50.245617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.725 qpair failed and we were unable to recover it. 00:29:06.725 [2024-07-15 11:39:50.255477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.725 [2024-07-15 11:39:50.255531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.725 [2024-07-15 11:39:50.255547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.725 [2024-07-15 11:39:50.255554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.725 [2024-07-15 11:39:50.255562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.725 [2024-07-15 11:39:50.255578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.725 qpair failed and we were unable to recover it. 00:29:06.725 [2024-07-15 11:39:50.265567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.725 [2024-07-15 11:39:50.265632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.725 [2024-07-15 11:39:50.265648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.725 [2024-07-15 11:39:50.265655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.725 [2024-07-15 11:39:50.265665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.725 [2024-07-15 11:39:50.265681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.725 qpair failed and we were unable to recover it. 
00:29:06.725 [2024-07-15 11:39:50.275623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.725 [2024-07-15 11:39:50.275687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.725 [2024-07-15 11:39:50.275703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.725 [2024-07-15 11:39:50.275711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.725 [2024-07-15 11:39:50.275718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.725 [2024-07-15 11:39:50.275733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.725 qpair failed and we were unable to recover it. 00:29:06.725 [2024-07-15 11:39:50.285629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.725 [2024-07-15 11:39:50.285694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.725 [2024-07-15 11:39:50.285710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.725 [2024-07-15 11:39:50.285718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.725 [2024-07-15 11:39:50.285725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.725 [2024-07-15 11:39:50.285740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.725 qpair failed and we were unable to recover it. 00:29:06.725 [2024-07-15 11:39:50.295645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.725 [2024-07-15 11:39:50.295701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.725 [2024-07-15 11:39:50.295718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.725 [2024-07-15 11:39:50.295725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.725 [2024-07-15 11:39:50.295732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.725 [2024-07-15 11:39:50.295748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.725 qpair failed and we were unable to recover it. 
00:29:06.725 [2024-07-15 11:39:50.305739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.725 [2024-07-15 11:39:50.305803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.725 [2024-07-15 11:39:50.305820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.725 [2024-07-15 11:39:50.305827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.725 [2024-07-15 11:39:50.305835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.725 [2024-07-15 11:39:50.305851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.725 qpair failed and we were unable to recover it. 00:29:06.983 [2024-07-15 11:39:50.315716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.983 [2024-07-15 11:39:50.315780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.983 [2024-07-15 11:39:50.315797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.315805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.315813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.315829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 00:29:06.984 [2024-07-15 11:39:50.325761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.325832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.325848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.325856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.325863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.325879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 
00:29:06.984 [2024-07-15 11:39:50.335812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.335906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.335921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.335928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.335934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.335950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 00:29:06.984 [2024-07-15 11:39:50.345764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.345829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.345844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.345851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.345857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.345871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 00:29:06.984 [2024-07-15 11:39:50.355835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.355893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.355908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.355918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.355924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.355939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 
00:29:06.984 [2024-07-15 11:39:50.365899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.365971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.365985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.365992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.365998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.366013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 00:29:06.984 [2024-07-15 11:39:50.375920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.375985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.376000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.376007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.376014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.376028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 00:29:06.984 [2024-07-15 11:39:50.386018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.386095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.386110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.386117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.386123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.386138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 
00:29:06.984 [2024-07-15 11:39:50.395991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.396051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.396067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.396074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.396080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.396094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 00:29:06.984 [2024-07-15 11:39:50.406068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.406174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.406189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.406197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.406204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.406218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 00:29:06.984 [2024-07-15 11:39:50.416058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.416116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.416131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.416138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.416144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.416159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 
00:29:06.984 [2024-07-15 11:39:50.426059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.426118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.426133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.426140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.426146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.426161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 00:29:06.984 [2024-07-15 11:39:50.436090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.436149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.436163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.436171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.436177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.436192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 00:29:06.984 [2024-07-15 11:39:50.446146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.446209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.984 [2024-07-15 11:39:50.446231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.984 [2024-07-15 11:39:50.446239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.984 [2024-07-15 11:39:50.446245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.984 [2024-07-15 11:39:50.446260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.984 qpair failed and we were unable to recover it. 
00:29:06.984 [2024-07-15 11:39:50.456146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.984 [2024-07-15 11:39:50.456205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.985 [2024-07-15 11:39:50.456219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.985 [2024-07-15 11:39:50.456230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.985 [2024-07-15 11:39:50.456237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.985 [2024-07-15 11:39:50.456252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.985 qpair failed and we were unable to recover it. 00:29:06.985 [2024-07-15 11:39:50.466197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.985 [2024-07-15 11:39:50.466265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.985 [2024-07-15 11:39:50.466281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.985 [2024-07-15 11:39:50.466287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.985 [2024-07-15 11:39:50.466294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.985 [2024-07-15 11:39:50.466308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.985 qpair failed and we were unable to recover it. 00:29:06.985 [2024-07-15 11:39:50.476197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.985 [2024-07-15 11:39:50.476267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.985 [2024-07-15 11:39:50.476282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.985 [2024-07-15 11:39:50.476288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.985 [2024-07-15 11:39:50.476294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.985 [2024-07-15 11:39:50.476309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.985 qpair failed and we were unable to recover it. 
00:29:06.985 [2024-07-15 11:39:50.486172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.985 [2024-07-15 11:39:50.486233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.985 [2024-07-15 11:39:50.486248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.985 [2024-07-15 11:39:50.486255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.985 [2024-07-15 11:39:50.486262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.985 [2024-07-15 11:39:50.486277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.985 qpair failed and we were unable to recover it. 00:29:06.985 [2024-07-15 11:39:50.496264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.985 [2024-07-15 11:39:50.496322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.985 [2024-07-15 11:39:50.496337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.985 [2024-07-15 11:39:50.496345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.985 [2024-07-15 11:39:50.496351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.985 [2024-07-15 11:39:50.496365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.985 qpair failed and we were unable to recover it. 00:29:06.985 [2024-07-15 11:39:50.506305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.985 [2024-07-15 11:39:50.506365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.985 [2024-07-15 11:39:50.506379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.985 [2024-07-15 11:39:50.506386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.985 [2024-07-15 11:39:50.506392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.985 [2024-07-15 11:39:50.506407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.985 qpair failed and we were unable to recover it. 
00:29:06.985 [2024-07-15 11:39:50.516334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.985 [2024-07-15 11:39:50.516427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.985 [2024-07-15 11:39:50.516442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.985 [2024-07-15 11:39:50.516449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.985 [2024-07-15 11:39:50.516455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.985 [2024-07-15 11:39:50.516471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.985 qpair failed and we were unable to recover it. 00:29:06.985 [2024-07-15 11:39:50.526345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.985 [2024-07-15 11:39:50.526403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.985 [2024-07-15 11:39:50.526418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.985 [2024-07-15 11:39:50.526426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.985 [2024-07-15 11:39:50.526432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.985 [2024-07-15 11:39:50.526446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.985 qpair failed and we were unable to recover it. 00:29:06.985 [2024-07-15 11:39:50.536436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.985 [2024-07-15 11:39:50.536497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.985 [2024-07-15 11:39:50.536515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.985 [2024-07-15 11:39:50.536522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.985 [2024-07-15 11:39:50.536528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.985 [2024-07-15 11:39:50.536542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.985 qpair failed and we were unable to recover it. 
00:29:06.985 [2024-07-15 11:39:50.546416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.985 [2024-07-15 11:39:50.546475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.985 [2024-07-15 11:39:50.546490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.985 [2024-07-15 11:39:50.546497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.985 [2024-07-15 11:39:50.546504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.985 [2024-07-15 11:39:50.546518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.985 qpair failed and we were unable to recover it. 00:29:06.985 [2024-07-15 11:39:50.556491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.986 [2024-07-15 11:39:50.556550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.986 [2024-07-15 11:39:50.556565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.986 [2024-07-15 11:39:50.556573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.986 [2024-07-15 11:39:50.556580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.986 [2024-07-15 11:39:50.556595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.986 qpair failed and we were unable to recover it. 00:29:06.986 [2024-07-15 11:39:50.566474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.986 [2024-07-15 11:39:50.566533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.986 [2024-07-15 11:39:50.566547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.986 [2024-07-15 11:39:50.566555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.986 [2024-07-15 11:39:50.566562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:06.986 [2024-07-15 11:39:50.566576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:06.986 qpair failed and we were unable to recover it. 
00:29:07.245 [2024-07-15 11:39:50.576520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.576586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.576602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.245 [2024-07-15 11:39:50.576609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.245 [2024-07-15 11:39:50.576615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.245 [2024-07-15 11:39:50.576635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.245 qpair failed and we were unable to recover it. 00:29:07.245 [2024-07-15 11:39:50.586539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.586609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.586624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.245 [2024-07-15 11:39:50.586631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.245 [2024-07-15 11:39:50.586637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.245 [2024-07-15 11:39:50.586652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.245 qpair failed and we were unable to recover it. 00:29:07.245 [2024-07-15 11:39:50.596555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.596617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.596633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.245 [2024-07-15 11:39:50.596641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.245 [2024-07-15 11:39:50.596648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.245 [2024-07-15 11:39:50.596662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.245 qpair failed and we were unable to recover it. 
00:29:07.245 [2024-07-15 11:39:50.606582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.606643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.606658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.245 [2024-07-15 11:39:50.606665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.245 [2024-07-15 11:39:50.606672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.245 [2024-07-15 11:39:50.606686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.245 qpair failed and we were unable to recover it. 00:29:07.245 [2024-07-15 11:39:50.616627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.616684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.616698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.245 [2024-07-15 11:39:50.616705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.245 [2024-07-15 11:39:50.616711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.245 [2024-07-15 11:39:50.616725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.245 qpair failed and we were unable to recover it. 00:29:07.245 [2024-07-15 11:39:50.626706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.626816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.626835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.245 [2024-07-15 11:39:50.626843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.245 [2024-07-15 11:39:50.626849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.245 [2024-07-15 11:39:50.626864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.245 qpair failed and we were unable to recover it. 
00:29:07.245 [2024-07-15 11:39:50.636672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.636732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.636747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.245 [2024-07-15 11:39:50.636754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.245 [2024-07-15 11:39:50.636760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.245 [2024-07-15 11:39:50.636775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.245 qpair failed and we were unable to recover it. 00:29:07.245 [2024-07-15 11:39:50.646704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.646767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.646781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.245 [2024-07-15 11:39:50.646788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.245 [2024-07-15 11:39:50.646794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.245 [2024-07-15 11:39:50.646808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.245 qpair failed and we were unable to recover it. 00:29:07.245 [2024-07-15 11:39:50.656738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.656794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.656809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.245 [2024-07-15 11:39:50.656816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.245 [2024-07-15 11:39:50.656822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.245 [2024-07-15 11:39:50.656837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.245 qpair failed and we were unable to recover it. 
00:29:07.245 [2024-07-15 11:39:50.666773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.666834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.666849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.245 [2024-07-15 11:39:50.666856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.245 [2024-07-15 11:39:50.666940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.245 [2024-07-15 11:39:50.666955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.245 qpair failed and we were unable to recover it. 00:29:07.245 [2024-07-15 11:39:50.676769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.676831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.676846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.245 [2024-07-15 11:39:50.676854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.245 [2024-07-15 11:39:50.676860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.245 [2024-07-15 11:39:50.676874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.245 qpair failed and we were unable to recover it. 00:29:07.245 [2024-07-15 11:39:50.686817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.686875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.686890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.245 [2024-07-15 11:39:50.686897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.245 [2024-07-15 11:39:50.686904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.245 [2024-07-15 11:39:50.686918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.245 qpair failed and we were unable to recover it. 
00:29:07.245 [2024-07-15 11:39:50.696843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.245 [2024-07-15 11:39:50.696896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.245 [2024-07-15 11:39:50.696911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.696919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.696925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.696940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 00:29:07.246 [2024-07-15 11:39:50.706875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.706935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.706950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.706957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.706963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.706978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 00:29:07.246 [2024-07-15 11:39:50.716904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.717006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.717020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.717027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.717034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.717049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 
00:29:07.246 [2024-07-15 11:39:50.726979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.727040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.727055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.727063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.727069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.727084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 00:29:07.246 [2024-07-15 11:39:50.736968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.737025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.737040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.737048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.737054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.737069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 00:29:07.246 [2024-07-15 11:39:50.747001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.747061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.747076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.747084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.747090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.747105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 
00:29:07.246 [2024-07-15 11:39:50.757017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.757098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.757113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.757124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.757130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.757146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 00:29:07.246 [2024-07-15 11:39:50.767106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.767166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.767182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.767189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.767195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.767210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 00:29:07.246 [2024-07-15 11:39:50.777072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.777127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.777142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.777149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.777156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.777170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 
00:29:07.246 [2024-07-15 11:39:50.787109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.787168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.787183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.787191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.787198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.787213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 00:29:07.246 [2024-07-15 11:39:50.797167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.797228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.797245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.797252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.797259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.797275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 00:29:07.246 [2024-07-15 11:39:50.807145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.807199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.807214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.807222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.807232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.807246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 
00:29:07.246 [2024-07-15 11:39:50.817186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.817249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.817265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.817272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.817278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.817294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 00:29:07.246 [2024-07-15 11:39:50.827229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.246 [2024-07-15 11:39:50.827293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.246 [2024-07-15 11:39:50.827308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.246 [2024-07-15 11:39:50.827316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.246 [2024-07-15 11:39:50.827322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.246 [2024-07-15 11:39:50.827337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.246 qpair failed and we were unable to recover it. 00:29:07.504 [2024-07-15 11:39:50.837252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.504 [2024-07-15 11:39:50.837318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.504 [2024-07-15 11:39:50.837333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.504 [2024-07-15 11:39:50.837340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.504 [2024-07-15 11:39:50.837347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.504 [2024-07-15 11:39:50.837362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.504 qpair failed and we were unable to recover it. 
00:29:07.504 [2024-07-15 11:39:50.847286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.504 [2024-07-15 11:39:50.847352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.504 [2024-07-15 11:39:50.847367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.504 [2024-07-15 11:39:50.847377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.504 [2024-07-15 11:39:50.847384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.504 [2024-07-15 11:39:50.847398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.504 qpair failed and we were unable to recover it. 00:29:07.504 [2024-07-15 11:39:50.857317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.504 [2024-07-15 11:39:50.857375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.504 [2024-07-15 11:39:50.857389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.504 [2024-07-15 11:39:50.857397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.504 [2024-07-15 11:39:50.857403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.504 [2024-07-15 11:39:50.857419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.504 qpair failed and we were unable to recover it. 00:29:07.504 [2024-07-15 11:39:50.867398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.504 [2024-07-15 11:39:50.867486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.504 [2024-07-15 11:39:50.867500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.504 [2024-07-15 11:39:50.867507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.504 [2024-07-15 11:39:50.867514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.504 [2024-07-15 11:39:50.867530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.504 qpair failed and we were unable to recover it. 
00:29:07.504 [2024-07-15 11:39:50.877347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.877408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.877422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.877430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.877436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.877451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-07-15 11:39:50.887394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.887454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.887469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.887477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.887483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.887497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-07-15 11:39:50.897433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.897492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.897507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.897514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.897521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.897535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 
00:29:07.505 [2024-07-15 11:39:50.907464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.907524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.907539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.907546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.907553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.907570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-07-15 11:39:50.917485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.917544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.917559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.917566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.917573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.917587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-07-15 11:39:50.927500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.927558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.927573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.927581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.927588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.927603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 
00:29:07.505 [2024-07-15 11:39:50.937510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.937573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.937591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.937598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.937604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.937619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-07-15 11:39:50.947571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.947634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.947649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.947656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.947661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.947675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-07-15 11:39:50.957586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.957645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.957659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.957666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.957672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.957687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 
00:29:07.505 [2024-07-15 11:39:50.967618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.967674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.967689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.967696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.967702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.967716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-07-15 11:39:50.977645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.977702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.977716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.977724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.977730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.977748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-07-15 11:39:50.987682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.987742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.987757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.987764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.987771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.987785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 
00:29:07.505 [2024-07-15 11:39:50.997710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:50.997765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:50.997779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:50.997787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:50.997794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:50.997808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-07-15 11:39:51.007734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.505 [2024-07-15 11:39:51.007789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.505 [2024-07-15 11:39:51.007804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.505 [2024-07-15 11:39:51.007812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.505 [2024-07-15 11:39:51.007819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.505 [2024-07-15 11:39:51.007832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-07-15 11:39:51.017767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.506 [2024-07-15 11:39:51.017841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.506 [2024-07-15 11:39:51.017856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.506 [2024-07-15 11:39:51.017863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.506 [2024-07-15 11:39:51.017869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.506 [2024-07-15 11:39:51.017884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-07-15 11:39:51.027807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.506 [2024-07-15 11:39:51.027865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.506 [2024-07-15 11:39:51.027883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.506 [2024-07-15 11:39:51.027891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.506 [2024-07-15 11:39:51.027897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.506 [2024-07-15 11:39:51.027911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-07-15 11:39:51.037767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.506 [2024-07-15 11:39:51.037826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.506 [2024-07-15 11:39:51.037842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.506 [2024-07-15 11:39:51.037849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.506 [2024-07-15 11:39:51.037855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.506 [2024-07-15 11:39:51.037870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-07-15 11:39:51.047851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.506 [2024-07-15 11:39:51.047913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.506 [2024-07-15 11:39:51.047927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.506 [2024-07-15 11:39:51.047935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.506 [2024-07-15 11:39:51.047941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.506 [2024-07-15 11:39:51.047956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-07-15 11:39:51.057881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.506 [2024-07-15 11:39:51.057938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.506 [2024-07-15 11:39:51.057953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.506 [2024-07-15 11:39:51.057961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.506 [2024-07-15 11:39:51.057967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.506 [2024-07-15 11:39:51.057982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-07-15 11:39:51.067916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.506 [2024-07-15 11:39:51.067975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.506 [2024-07-15 11:39:51.067990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.506 [2024-07-15 11:39:51.067998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.506 [2024-07-15 11:39:51.068008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.506 [2024-07-15 11:39:51.068023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-07-15 11:39:51.077885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.506 [2024-07-15 11:39:51.077948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.506 [2024-07-15 11:39:51.077964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.506 [2024-07-15 11:39:51.077971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.506 [2024-07-15 11:39:51.077978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.506 [2024-07-15 11:39:51.077993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-07-15 11:39:51.088033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.506 [2024-07-15 11:39:51.088092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.506 [2024-07-15 11:39:51.088107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.506 [2024-07-15 11:39:51.088114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.506 [2024-07-15 11:39:51.088122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.506 [2024-07-15 11:39:51.088137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.765 [2024-07-15 11:39:51.098018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.765 [2024-07-15 11:39:51.098087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.765 [2024-07-15 11:39:51.098102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.765 [2024-07-15 11:39:51.098110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.765 [2024-07-15 11:39:51.098116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.765 [2024-07-15 11:39:51.098131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.765 qpair failed and we were unable to recover it. 00:29:07.765 [2024-07-15 11:39:51.108039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.765 [2024-07-15 11:39:51.108108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.765 [2024-07-15 11:39:51.108123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.765 [2024-07-15 11:39:51.108131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.765 [2024-07-15 11:39:51.108137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.765 [2024-07-15 11:39:51.108152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.765 qpair failed and we were unable to recover it. 
00:29:07.765 [2024-07-15 11:39:51.118046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.765 [2024-07-15 11:39:51.118115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.765 [2024-07-15 11:39:51.118131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.765 [2024-07-15 11:39:51.118138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.765 [2024-07-15 11:39:51.118144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.765 [2024-07-15 11:39:51.118159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.765 qpair failed and we were unable to recover it. 00:29:07.765 [2024-07-15 11:39:51.128082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.765 [2024-07-15 11:39:51.128142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.765 [2024-07-15 11:39:51.128158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.765 [2024-07-15 11:39:51.128165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.765 [2024-07-15 11:39:51.128172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.765 [2024-07-15 11:39:51.128186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.765 qpair failed and we were unable to recover it. 00:29:07.765 [2024-07-15 11:39:51.138119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.765 [2024-07-15 11:39:51.138177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.765 [2024-07-15 11:39:51.138192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.765 [2024-07-15 11:39:51.138199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.765 [2024-07-15 11:39:51.138206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.765 [2024-07-15 11:39:51.138220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.765 qpair failed and we were unable to recover it. 
00:29:07.765 [2024-07-15 11:39:51.148170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.765 [2024-07-15 11:39:51.148274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.765 [2024-07-15 11:39:51.148289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.765 [2024-07-15 11:39:51.148296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.765 [2024-07-15 11:39:51.148303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.765 [2024-07-15 11:39:51.148318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.765 qpair failed and we were unable to recover it. 00:29:07.765 [2024-07-15 11:39:51.158142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.765 [2024-07-15 11:39:51.158207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.765 [2024-07-15 11:39:51.158223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.765 [2024-07-15 11:39:51.158234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.765 [2024-07-15 11:39:51.158243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.765 [2024-07-15 11:39:51.158258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.765 qpair failed and we were unable to recover it. 00:29:07.765 [2024-07-15 11:39:51.168217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.765 [2024-07-15 11:39:51.168282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.765 [2024-07-15 11:39:51.168297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.765 [2024-07-15 11:39:51.168304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.765 [2024-07-15 11:39:51.168310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.765 [2024-07-15 11:39:51.168324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.765 qpair failed and we were unable to recover it. 
00:29:07.765 [2024-07-15 11:39:51.178216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.765 [2024-07-15 11:39:51.178276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.765 [2024-07-15 11:39:51.178291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.765 [2024-07-15 11:39:51.178298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.765 [2024-07-15 11:39:51.178304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.765 [2024-07-15 11:39:51.178319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.765 qpair failed and we were unable to recover it. 00:29:07.765 [2024-07-15 11:39:51.188261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.765 [2024-07-15 11:39:51.188321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.765 [2024-07-15 11:39:51.188335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.765 [2024-07-15 11:39:51.188343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.765 [2024-07-15 11:39:51.188349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.765 [2024-07-15 11:39:51.188363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.765 qpair failed and we were unable to recover it. 00:29:07.765 [2024-07-15 11:39:51.198270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.765 [2024-07-15 11:39:51.198331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.765 [2024-07-15 11:39:51.198345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.765 [2024-07-15 11:39:51.198353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.765 [2024-07-15 11:39:51.198360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.765 [2024-07-15 11:39:51.198375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.765 qpair failed and we were unable to recover it. 
00:29:07.765 [2024-07-15 11:39:51.208297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.765 [2024-07-15 11:39:51.208357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.765 [2024-07-15 11:39:51.208372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.208380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.208386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.208401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 00:29:07.766 [2024-07-15 11:39:51.218369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.218425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.218439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.218447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.218453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.218468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 00:29:07.766 [2024-07-15 11:39:51.228383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.228451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.228466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.228473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.228479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.228494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 
00:29:07.766 [2024-07-15 11:39:51.238383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.238442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.238457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.238464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.238471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.238485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 00:29:07.766 [2024-07-15 11:39:51.248412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.248473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.248490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.248500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.248507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.248523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 00:29:07.766 [2024-07-15 11:39:51.258441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.258501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.258516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.258524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.258530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.258546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 
00:29:07.766 [2024-07-15 11:39:51.268433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.268491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.268507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.268514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.268521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.268536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 00:29:07.766 [2024-07-15 11:39:51.278442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.278503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.278520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.278528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.278534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.278550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 00:29:07.766 [2024-07-15 11:39:51.288467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.288530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.288546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.288552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.288559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.288574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 
00:29:07.766 [2024-07-15 11:39:51.298599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.298657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.298673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.298680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.298686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.298701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 00:29:07.766 [2024-07-15 11:39:51.308533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.308593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.308608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.308615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.308621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.308635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 00:29:07.766 [2024-07-15 11:39:51.318551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.318616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.318631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.318638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.318645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.318660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 
00:29:07.766 [2024-07-15 11:39:51.328641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.328702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.328717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.328725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.328731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.328745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 00:29:07.766 [2024-07-15 11:39:51.338676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.338734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.338752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.338760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.338766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.766 [2024-07-15 11:39:51.338781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.766 qpair failed and we were unable to recover it. 00:29:07.766 [2024-07-15 11:39:51.348705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.766 [2024-07-15 11:39:51.348763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.766 [2024-07-15 11:39:51.348778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.766 [2024-07-15 11:39:51.348786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.766 [2024-07-15 11:39:51.348793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:07.767 [2024-07-15 11:39:51.348808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.767 qpair failed and we were unable to recover it. 
00:29:08.025 [2024-07-15 11:39:51.358672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.025 [2024-07-15 11:39:51.358741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.025 [2024-07-15 11:39:51.358756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.025 [2024-07-15 11:39:51.358763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.025 [2024-07-15 11:39:51.358770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.025 [2024-07-15 11:39:51.358785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.025 qpair failed and we were unable to recover it. 00:29:08.025 [2024-07-15 11:39:51.368694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.025 [2024-07-15 11:39:51.368755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.025 [2024-07-15 11:39:51.368770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.025 [2024-07-15 11:39:51.368777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.025 [2024-07-15 11:39:51.368784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.025 [2024-07-15 11:39:51.368799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.025 qpair failed and we were unable to recover it. 00:29:08.025 [2024-07-15 11:39:51.378737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.025 [2024-07-15 11:39:51.378796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.025 [2024-07-15 11:39:51.378811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.025 [2024-07-15 11:39:51.378818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.025 [2024-07-15 11:39:51.378825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.025 [2024-07-15 11:39:51.378843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.025 qpair failed and we were unable to recover it. 
00:29:08.025 [2024-07-15 11:39:51.388770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.025 [2024-07-15 11:39:51.388833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.025 [2024-07-15 11:39:51.388848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.025 [2024-07-15 11:39:51.388855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.025 [2024-07-15 11:39:51.388861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.025 [2024-07-15 11:39:51.388877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.025 qpair failed and we were unable to recover it. 00:29:08.025 [2024-07-15 11:39:51.398830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.025 [2024-07-15 11:39:51.398889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.025 [2024-07-15 11:39:51.398905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.025 [2024-07-15 11:39:51.398912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.025 [2024-07-15 11:39:51.398919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.025 [2024-07-15 11:39:51.398933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.025 qpair failed and we were unable to recover it. 00:29:08.025 [2024-07-15 11:39:51.408853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.025 [2024-07-15 11:39:51.408938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.025 [2024-07-15 11:39:51.408955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.025 [2024-07-15 11:39:51.408962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.025 [2024-07-15 11:39:51.408969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.025 [2024-07-15 11:39:51.408984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.025 qpair failed and we were unable to recover it. 
00:29:08.025 [2024-07-15 11:39:51.418888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.025 [2024-07-15 11:39:51.418948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.418965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.418972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.418979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.418997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 00:29:08.026 [2024-07-15 11:39:51.428926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.428990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.429009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.429016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.429023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.429039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 00:29:08.026 [2024-07-15 11:39:51.438879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.438940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.438956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.438963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.438970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.438986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 
00:29:08.026 [2024-07-15 11:39:51.448915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.449013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.449031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.449039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.449047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.449063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 00:29:08.026 [2024-07-15 11:39:51.458974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.459036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.459052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.459060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.459066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.459083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 00:29:08.026 [2024-07-15 11:39:51.469036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.469095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.469112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.469120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.469130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.469146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 
00:29:08.026 [2024-07-15 11:39:51.479029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.479097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.479113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.479121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.479128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.479144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 00:29:08.026 [2024-07-15 11:39:51.489112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.489175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.489192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.489199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.489207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.489228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 00:29:08.026 [2024-07-15 11:39:51.499153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.499216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.499238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.499247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.499254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.499271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 
00:29:08.026 [2024-07-15 11:39:51.509212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.509319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.509335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.509344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.509350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.509366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 00:29:08.026 [2024-07-15 11:39:51.519182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.519283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.519300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.519307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.519315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.519332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 00:29:08.026 [2024-07-15 11:39:51.529203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.529263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.529287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.529295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.529302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.529318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 
00:29:08.026 [2024-07-15 11:39:51.539234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.539290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.539306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.539314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.539320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.539336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 00:29:08.026 [2024-07-15 11:39:51.549273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.549333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.549350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.549357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.549364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.549380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 00:29:08.026 [2024-07-15 11:39:51.559292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.559384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.559400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.026 [2024-07-15 11:39:51.559407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.026 [2024-07-15 11:39:51.559418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.026 [2024-07-15 11:39:51.559433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.026 qpair failed and we were unable to recover it. 
00:29:08.026 [2024-07-15 11:39:51.569308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.026 [2024-07-15 11:39:51.569368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.026 [2024-07-15 11:39:51.569384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.027 [2024-07-15 11:39:51.569392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.027 [2024-07-15 11:39:51.569398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.027 [2024-07-15 11:39:51.569415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.027 qpair failed and we were unable to recover it. 00:29:08.027 [2024-07-15 11:39:51.579291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.027 [2024-07-15 11:39:51.579348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.027 [2024-07-15 11:39:51.579364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.027 [2024-07-15 11:39:51.579371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.027 [2024-07-15 11:39:51.579378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.027 [2024-07-15 11:39:51.579394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.027 qpair failed and we were unable to recover it. 00:29:08.027 [2024-07-15 11:39:51.589332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.027 [2024-07-15 11:39:51.589392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.027 [2024-07-15 11:39:51.589409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.027 [2024-07-15 11:39:51.589416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.027 [2024-07-15 11:39:51.589423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.027 [2024-07-15 11:39:51.589439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.027 qpair failed and we were unable to recover it. 
00:29:08.027 [2024-07-15 11:39:51.599409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.027 [2024-07-15 11:39:51.599465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.027 [2024-07-15 11:39:51.599481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.027 [2024-07-15 11:39:51.599488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.027 [2024-07-15 11:39:51.599495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.027 [2024-07-15 11:39:51.599512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.027 qpair failed and we were unable to recover it. 00:29:08.027 [2024-07-15 11:39:51.609428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.027 [2024-07-15 11:39:51.609489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.027 [2024-07-15 11:39:51.609505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.027 [2024-07-15 11:39:51.609512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.027 [2024-07-15 11:39:51.609519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.027 [2024-07-15 11:39:51.609536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.027 qpair failed and we were unable to recover it. 00:29:08.286 [2024-07-15 11:39:51.619430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.619502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.619518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.619526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.619533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.619549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 
00:29:08.286 [2024-07-15 11:39:51.629508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.629573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.629590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.629597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.629604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.629621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 00:29:08.286 [2024-07-15 11:39:51.639449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.639513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.639529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.639538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.639545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.639560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 00:29:08.286 [2024-07-15 11:39:51.649495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.649554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.649570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.649581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.649588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.649603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 
00:29:08.286 [2024-07-15 11:39:51.659577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.659638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.659655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.659662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.659669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.659685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 00:29:08.286 [2024-07-15 11:39:51.669612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.669672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.669688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.669695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.669703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.669718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 00:29:08.286 [2024-07-15 11:39:51.679623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.679687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.679703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.679711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.679719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.679735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 
00:29:08.286 [2024-07-15 11:39:51.689702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.689762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.689778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.689785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.689793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.689809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 00:29:08.286 [2024-07-15 11:39:51.699689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.699748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.699764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.699771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.699778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.699794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 00:29:08.286 [2024-07-15 11:39:51.709730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.709789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.709806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.709813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.709821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.709838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 
00:29:08.286 [2024-07-15 11:39:51.719758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.719819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.719835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.719842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.719849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.719865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 00:29:08.286 [2024-07-15 11:39:51.729782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.729846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.729862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.729870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.729876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.729892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 00:29:08.286 [2024-07-15 11:39:51.739815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.286 [2024-07-15 11:39:51.739874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.286 [2024-07-15 11:39:51.739894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.286 [2024-07-15 11:39:51.739903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.286 [2024-07-15 11:39:51.739910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.286 [2024-07-15 11:39:51.739926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.286 qpair failed and we were unable to recover it. 
00:29:08.287 [2024-07-15 11:39:51.749869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.749927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.749942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.749950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.749957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.749973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 00:29:08.287 [2024-07-15 11:39:51.759865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.759926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.759942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.759949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.759956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.759972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 00:29:08.287 [2024-07-15 11:39:51.769904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.769997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.770014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.770021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.770028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.770043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 
00:29:08.287 [2024-07-15 11:39:51.779961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.780067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.780085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.780093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.780100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.780120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 00:29:08.287 [2024-07-15 11:39:51.789960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.790017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.790034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.790041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.790047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.790063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 00:29:08.287 [2024-07-15 11:39:51.799968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.800028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.800044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.800052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.800059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.800075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 
00:29:08.287 [2024-07-15 11:39:51.810004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.810060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.810076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.810083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.810091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.810107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 00:29:08.287 [2024-07-15 11:39:51.820083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.820139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.820154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.820163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.820169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.820185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 00:29:08.287 [2024-07-15 11:39:51.830068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.830127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.830147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.830155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.830162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.830178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 
00:29:08.287 [2024-07-15 11:39:51.840080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.840143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.840159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.840167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.840174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.840190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 00:29:08.287 [2024-07-15 11:39:51.850132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.850191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.850208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.850215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.850222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.850241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 00:29:08.287 [2024-07-15 11:39:51.860145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.860199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.860215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.860223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.860236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.860252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 
00:29:08.287 [2024-07-15 11:39:51.870182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.287 [2024-07-15 11:39:51.870246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.287 [2024-07-15 11:39:51.870263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.287 [2024-07-15 11:39:51.870270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.287 [2024-07-15 11:39:51.870277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.287 [2024-07-15 11:39:51.870296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.287 qpair failed and we were unable to recover it. 00:29:08.546 [2024-07-15 11:39:51.880204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.546 [2024-07-15 11:39:51.880276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.546 [2024-07-15 11:39:51.880293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.546 [2024-07-15 11:39:51.880301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.546 [2024-07-15 11:39:51.880309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.546 [2024-07-15 11:39:51.880327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.546 qpair failed and we were unable to recover it. 00:29:08.546 [2024-07-15 11:39:51.890241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.546 [2024-07-15 11:39:51.890310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.546 [2024-07-15 11:39:51.890326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.546 [2024-07-15 11:39:51.890333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.546 [2024-07-15 11:39:51.890340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.546 [2024-07-15 11:39:51.890357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.546 qpair failed and we were unable to recover it. 
00:29:08.546 [2024-07-15 11:39:51.900279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.546 [2024-07-15 11:39:51.900341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.546 [2024-07-15 11:39:51.900357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.546 [2024-07-15 11:39:51.900365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.546 [2024-07-15 11:39:51.900374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.546 [2024-07-15 11:39:51.900390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.546 qpair failed and we were unable to recover it. 00:29:08.546 [2024-07-15 11:39:51.910296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.546 [2024-07-15 11:39:51.910355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.546 [2024-07-15 11:39:51.910371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.546 [2024-07-15 11:39:51.910378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.546 [2024-07-15 11:39:51.910385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.546 [2024-07-15 11:39:51.910402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.546 qpair failed and we were unable to recover it. 00:29:08.546 [2024-07-15 11:39:51.920318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.546 [2024-07-15 11:39:51.920384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.546 [2024-07-15 11:39:51.920401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.546 [2024-07-15 11:39:51.920408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.546 [2024-07-15 11:39:51.920415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.546 [2024-07-15 11:39:51.920431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.546 qpair failed and we were unable to recover it. 
00:29:08.546 [2024-07-15 11:39:51.930354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.546 [2024-07-15 11:39:51.930415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.546 [2024-07-15 11:39:51.930432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.546 [2024-07-15 11:39:51.930439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.546 [2024-07-15 11:39:51.930447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:51.930463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 00:29:08.547 [2024-07-15 11:39:51.940406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:51.940469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:51.940485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:51.940493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:51.940500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:51.940516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 00:29:08.547 [2024-07-15 11:39:51.950415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:51.950473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:51.950489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:51.950497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:51.950504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:51.950520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 
00:29:08.547 [2024-07-15 11:39:51.960466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:51.960522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:51.960538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:51.960546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:51.960556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:51.960572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 00:29:08.547 [2024-07-15 11:39:51.970462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:51.970520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:51.970537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:51.970544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:51.970552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:51.970568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 00:29:08.547 [2024-07-15 11:39:51.980466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:51.980523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:51.980539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:51.980546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:51.980553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:51.980569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 
00:29:08.547 [2024-07-15 11:39:51.990536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:51.990596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:51.990612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:51.990619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:51.990626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:51.990642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 00:29:08.547 [2024-07-15 11:39:52.000538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:52.000595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:52.000612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:52.000619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:52.000627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:52.000642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 00:29:08.547 [2024-07-15 11:39:52.010576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:52.010630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:52.010646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:52.010654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:52.010661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:52.010677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 
00:29:08.547 [2024-07-15 11:39:52.020587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:52.020648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:52.020664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:52.020671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:52.020679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:52.020695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 00:29:08.547 [2024-07-15 11:39:52.030624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:52.030681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:52.030697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:52.030705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:52.030711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:52.030728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 00:29:08.547 [2024-07-15 11:39:52.040648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:52.040707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:52.040723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:52.040730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:52.040737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:52.040753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 
00:29:08.547 [2024-07-15 11:39:52.050654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:52.050736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:52.050752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:52.050765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:52.050772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:52.050787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 00:29:08.547 [2024-07-15 11:39:52.060712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:52.060768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:52.060785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:52.060792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:52.060800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.547 [2024-07-15 11:39:52.060815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.547 qpair failed and we were unable to recover it. 00:29:08.547 [2024-07-15 11:39:52.070742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.547 [2024-07-15 11:39:52.070805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.547 [2024-07-15 11:39:52.070821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.547 [2024-07-15 11:39:52.070828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.547 [2024-07-15 11:39:52.070836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.548 [2024-07-15 11:39:52.070852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.548 qpair failed and we were unable to recover it. 
00:29:08.548 [2024-07-15 11:39:52.080775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.548 [2024-07-15 11:39:52.080831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.548 [2024-07-15 11:39:52.080847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.548 [2024-07-15 11:39:52.080854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.548 [2024-07-15 11:39:52.080863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.548 [2024-07-15 11:39:52.080878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.548 qpair failed and we were unable to recover it. 00:29:08.548 [2024-07-15 11:39:52.090798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.548 [2024-07-15 11:39:52.090855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.548 [2024-07-15 11:39:52.090872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.548 [2024-07-15 11:39:52.090879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.548 [2024-07-15 11:39:52.090886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.548 [2024-07-15 11:39:52.090901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.548 qpair failed and we were unable to recover it. 00:29:08.548 [2024-07-15 11:39:52.100818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.548 [2024-07-15 11:39:52.100881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.548 [2024-07-15 11:39:52.100898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.548 [2024-07-15 11:39:52.100905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.548 [2024-07-15 11:39:52.100913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.548 [2024-07-15 11:39:52.100929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.548 qpair failed and we were unable to recover it. 
00:29:08.548 [2024-07-15 11:39:52.110864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.548 [2024-07-15 11:39:52.110942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.548 [2024-07-15 11:39:52.110959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.548 [2024-07-15 11:39:52.110966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.548 [2024-07-15 11:39:52.110974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.548 [2024-07-15 11:39:52.110990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.548 qpair failed and we were unable to recover it. 00:29:08.548 [2024-07-15 11:39:52.120869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.548 [2024-07-15 11:39:52.120933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.548 [2024-07-15 11:39:52.120949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.548 [2024-07-15 11:39:52.120956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.548 [2024-07-15 11:39:52.120963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.548 [2024-07-15 11:39:52.120979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.548 qpair failed and we were unable to recover it. 00:29:08.548 [2024-07-15 11:39:52.130929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.548 [2024-07-15 11:39:52.130990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.548 [2024-07-15 11:39:52.131006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.548 [2024-07-15 11:39:52.131014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.548 [2024-07-15 11:39:52.131022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.548 [2024-07-15 11:39:52.131040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.548 qpair failed and we were unable to recover it. 
00:29:08.807 [2024-07-15 11:39:52.141020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.807 [2024-07-15 11:39:52.141126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.807 [2024-07-15 11:39:52.141145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.807 [2024-07-15 11:39:52.141153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.807 [2024-07-15 11:39:52.141161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.807 [2024-07-15 11:39:52.141177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.807 qpair failed and we were unable to recover it. 00:29:08.807 [2024-07-15 11:39:52.151020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.807 [2024-07-15 11:39:52.151132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.807 [2024-07-15 11:39:52.151149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.807 [2024-07-15 11:39:52.151156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.807 [2024-07-15 11:39:52.151164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.807 [2024-07-15 11:39:52.151181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.807 qpair failed and we were unable to recover it. 00:29:08.807 [2024-07-15 11:39:52.161075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.807 [2024-07-15 11:39:52.161182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.807 [2024-07-15 11:39:52.161198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.807 [2024-07-15 11:39:52.161205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.807 [2024-07-15 11:39:52.161212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.807 [2024-07-15 11:39:52.161232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.807 qpair failed and we were unable to recover it. 
00:29:08.807 [2024-07-15 11:39:52.171009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.807 [2024-07-15 11:39:52.171074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.807 [2024-07-15 11:39:52.171089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.807 [2024-07-15 11:39:52.171097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.807 [2024-07-15 11:39:52.171104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.807 [2024-07-15 11:39:52.171119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.807 qpair failed and we were unable to recover it. 00:29:08.807 [2024-07-15 11:39:52.181087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.807 [2024-07-15 11:39:52.181147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.807 [2024-07-15 11:39:52.181163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.807 [2024-07-15 11:39:52.181171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.807 [2024-07-15 11:39:52.181178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.807 [2024-07-15 11:39:52.181194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.807 qpair failed and we were unable to recover it. 00:29:08.807 [2024-07-15 11:39:52.191121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.807 [2024-07-15 11:39:52.191197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.807 [2024-07-15 11:39:52.191212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.807 [2024-07-15 11:39:52.191220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.807 [2024-07-15 11:39:52.191231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.807 [2024-07-15 11:39:52.191246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.807 qpair failed and we were unable to recover it. 
00:29:08.807 [2024-07-15 11:39:52.201137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.807 [2024-07-15 11:39:52.201240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.807 [2024-07-15 11:39:52.201257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.807 [2024-07-15 11:39:52.201264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.807 [2024-07-15 11:39:52.201271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.807 [2024-07-15 11:39:52.201288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.807 qpair failed and we were unable to recover it. 00:29:08.807 [2024-07-15 11:39:52.211159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.807 [2024-07-15 11:39:52.211220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.807 [2024-07-15 11:39:52.211240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.807 [2024-07-15 11:39:52.211247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.807 [2024-07-15 11:39:52.211254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.807 [2024-07-15 11:39:52.211270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.807 qpair failed and we were unable to recover it. 00:29:08.807 [2024-07-15 11:39:52.221175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.807 [2024-07-15 11:39:52.221258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.807 [2024-07-15 11:39:52.221274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.807 [2024-07-15 11:39:52.221283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.807 [2024-07-15 11:39:52.221289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.807 [2024-07-15 11:39:52.221305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.807 qpair failed and we were unable to recover it. 
00:29:08.807 [2024-07-15 11:39:52.231218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.807 [2024-07-15 11:39:52.231282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.807 [2024-07-15 11:39:52.231301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.807 [2024-07-15 11:39:52.231309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.807 [2024-07-15 11:39:52.231316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.807 [2024-07-15 11:39:52.231333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.807 qpair failed and we were unable to recover it. 00:29:08.807 [2024-07-15 11:39:52.241184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.807 [2024-07-15 11:39:52.241243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.807 [2024-07-15 11:39:52.241258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.807 [2024-07-15 11:39:52.241265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.807 [2024-07-15 11:39:52.241271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.807 [2024-07-15 11:39:52.241286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.807 qpair failed and we were unable to recover it. 00:29:08.807 [2024-07-15 11:39:52.251274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.807 [2024-07-15 11:39:52.251335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.251351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.251358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.251366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.251381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 
00:29:08.808 [2024-07-15 11:39:52.261275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.261357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.261374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.261382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.261389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.261406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-07-15 11:39:52.271339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.271403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.271419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.271427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.271433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.271452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-07-15 11:39:52.281384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.281448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.281464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.281472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.281479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.281495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 
00:29:08.808 [2024-07-15 11:39:52.291402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.291463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.291479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.291487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.291494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.291510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-07-15 11:39:52.301450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.301513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.301529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.301536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.301544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.301560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-07-15 11:39:52.311482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.311553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.311569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.311576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.311584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.311599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 
00:29:08.808 [2024-07-15 11:39:52.321457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.321522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.321540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.321548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.321555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.321570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-07-15 11:39:52.331501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.331555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.331570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.331577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.331585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.331601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-07-15 11:39:52.341533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.341589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.341605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.341612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.341619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.341635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 
00:29:08.808 [2024-07-15 11:39:52.351587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.351646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.351662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.351670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.351678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.351693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-07-15 11:39:52.361607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.361668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.361685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.361692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.361703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.361719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-07-15 11:39:52.371619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.371674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.371690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.371697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.371705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.371721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 
00:29:08.808 [2024-07-15 11:39:52.381687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.381748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.381765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.381773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.381780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.808 [2024-07-15 11:39:52.381795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-07-15 11:39:52.391687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.808 [2024-07-15 11:39:52.391745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.808 [2024-07-15 11:39:52.391761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.808 [2024-07-15 11:39:52.391768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.808 [2024-07-15 11:39:52.391776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:08.809 [2024-07-15 11:39:52.391792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:08.809 qpair failed and we were unable to recover it. 00:29:09.067 [2024-07-15 11:39:52.401708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.067 [2024-07-15 11:39:52.401775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.067 [2024-07-15 11:39:52.401791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.067 [2024-07-15 11:39:52.401798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.067 [2024-07-15 11:39:52.401805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.067 [2024-07-15 11:39:52.401821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.067 qpair failed and we were unable to recover it. 
00:29:09.067 [2024-07-15 11:39:52.411747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.067 [2024-07-15 11:39:52.411809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.067 [2024-07-15 11:39:52.411825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.067 [2024-07-15 11:39:52.411832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.067 [2024-07-15 11:39:52.411840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.067 [2024-07-15 11:39:52.411855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.067 qpair failed and we were unable to recover it. 00:29:09.067 [2024-07-15 11:39:52.421776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.067 [2024-07-15 11:39:52.421838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.067 [2024-07-15 11:39:52.421855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.067 [2024-07-15 11:39:52.421863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.067 [2024-07-15 11:39:52.421870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.067 [2024-07-15 11:39:52.421886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.067 qpair failed and we were unable to recover it. 00:29:09.067 [2024-07-15 11:39:52.431808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.067 [2024-07-15 11:39:52.431873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.067 [2024-07-15 11:39:52.431889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.067 [2024-07-15 11:39:52.431896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.067 [2024-07-15 11:39:52.431904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.067 [2024-07-15 11:39:52.431920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.067 qpair failed and we were unable to recover it. 
00:29:09.067 [2024-07-15 11:39:52.441814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.067 [2024-07-15 11:39:52.441875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.067 [2024-07-15 11:39:52.441891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.068 [2024-07-15 11:39:52.441899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.068 [2024-07-15 11:39:52.441906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.068 [2024-07-15 11:39:52.441922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-07-15 11:39:52.451835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.068 [2024-07-15 11:39:52.451894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.068 [2024-07-15 11:39:52.451909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.068 [2024-07-15 11:39:52.451919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.068 [2024-07-15 11:39:52.451927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.068 [2024-07-15 11:39:52.451943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-07-15 11:39:52.461802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.068 [2024-07-15 11:39:52.461877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.068 [2024-07-15 11:39:52.461893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.068 [2024-07-15 11:39:52.461900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.068 [2024-07-15 11:39:52.461907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.068 [2024-07-15 11:39:52.461923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.068 qpair failed and we were unable to recover it. 
00:29:09.068 [2024-07-15 11:39:52.471904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.068 [2024-07-15 11:39:52.471963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.068 [2024-07-15 11:39:52.471978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.068 [2024-07-15 11:39:52.471986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.068 [2024-07-15 11:39:52.471993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.068 [2024-07-15 11:39:52.472009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-07-15 11:39:52.481923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.068 [2024-07-15 11:39:52.481984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.068 [2024-07-15 11:39:52.482000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.068 [2024-07-15 11:39:52.482008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.068 [2024-07-15 11:39:52.482015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.068 [2024-07-15 11:39:52.482031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-07-15 11:39:52.491943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.068 [2024-07-15 11:39:52.492005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.068 [2024-07-15 11:39:52.492021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.068 [2024-07-15 11:39:52.492028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.068 [2024-07-15 11:39:52.492035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.068 [2024-07-15 11:39:52.492051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.068 qpair failed and we were unable to recover it. 
00:29:09.068 [2024-07-15 11:39:52.501991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.068 [2024-07-15 11:39:52.502050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.068 [2024-07-15 11:39:52.502066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.068 [2024-07-15 11:39:52.502074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.068 [2024-07-15 11:39:52.502081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.068 [2024-07-15 11:39:52.502097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-07-15 11:39:52.512010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.068 [2024-07-15 11:39:52.512073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.068 [2024-07-15 11:39:52.512095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.068 [2024-07-15 11:39:52.512102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.068 [2024-07-15 11:39:52.512110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.068 [2024-07-15 11:39:52.512126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-07-15 11:39:52.522055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.068 [2024-07-15 11:39:52.522120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.068 [2024-07-15 11:39:52.522136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.068 [2024-07-15 11:39:52.522143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.068 [2024-07-15 11:39:52.522149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.068 [2024-07-15 11:39:52.522164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.068 qpair failed and we were unable to recover it. 
00:29:09.068 [2024-07-15 11:39:52.532081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.068 [2024-07-15 11:39:52.532139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.068 [2024-07-15 11:39:52.532153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.068 [2024-07-15 11:39:52.532161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.068 [2024-07-15 11:39:52.532167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.068 [2024-07-15 11:39:52.532182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-07-15 11:39:52.542143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.068 [2024-07-15 11:39:52.542203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.068 [2024-07-15 11:39:52.542217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.068 [2024-07-15 11:39:52.542231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.068 [2024-07-15 11:39:52.542238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.068 [2024-07-15 11:39:52.542253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-07-15 11:39:52.552126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.068 [2024-07-15 11:39:52.552202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.068 [2024-07-15 11:39:52.552216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.069 [2024-07-15 11:39:52.552223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.069 [2024-07-15 11:39:52.552233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.069 [2024-07-15 11:39:52.552248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.069 qpair failed and we were unable to recover it. 
00:29:09.069 [2024-07-15 11:39:52.562143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.069 [2024-07-15 11:39:52.562205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.069 [2024-07-15 11:39:52.562220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.069 [2024-07-15 11:39:52.562230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.069 [2024-07-15 11:39:52.562237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.069 [2024-07-15 11:39:52.562252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-07-15 11:39:52.572187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.069 [2024-07-15 11:39:52.572251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.069 [2024-07-15 11:39:52.572267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.069 [2024-07-15 11:39:52.572274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.069 [2024-07-15 11:39:52.572280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.069 [2024-07-15 11:39:52.572295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-07-15 11:39:52.582194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.069 [2024-07-15 11:39:52.582270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.069 [2024-07-15 11:39:52.582286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.069 [2024-07-15 11:39:52.582293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.069 [2024-07-15 11:39:52.582299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.069 [2024-07-15 11:39:52.582315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.069 qpair failed and we were unable to recover it. 
00:29:09.069 [2024-07-15 11:39:52.592240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.069 [2024-07-15 11:39:52.592301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.069 [2024-07-15 11:39:52.592315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.069 [2024-07-15 11:39:52.592323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.069 [2024-07-15 11:39:52.592329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.069 [2024-07-15 11:39:52.592344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-07-15 11:39:52.602258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.069 [2024-07-15 11:39:52.602318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.069 [2024-07-15 11:39:52.602333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.069 [2024-07-15 11:39:52.602341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.069 [2024-07-15 11:39:52.602347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.069 [2024-07-15 11:39:52.602362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-07-15 11:39:52.612283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.069 [2024-07-15 11:39:52.612341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.069 [2024-07-15 11:39:52.612356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.069 [2024-07-15 11:39:52.612364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.069 [2024-07-15 11:39:52.612370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.069 [2024-07-15 11:39:52.612385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.069 qpair failed and we were unable to recover it. 
00:29:09.069 [2024-07-15 11:39:52.622327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.069 [2024-07-15 11:39:52.622388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.069 [2024-07-15 11:39:52.622403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.069 [2024-07-15 11:39:52.622410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.069 [2024-07-15 11:39:52.622417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.069 [2024-07-15 11:39:52.622431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-07-15 11:39:52.632362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.069 [2024-07-15 11:39:52.632418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.069 [2024-07-15 11:39:52.632436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.069 [2024-07-15 11:39:52.632443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.069 [2024-07-15 11:39:52.632449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.069 [2024-07-15 11:39:52.632465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-07-15 11:39:52.642376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.069 [2024-07-15 11:39:52.642430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.069 [2024-07-15 11:39:52.642445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.069 [2024-07-15 11:39:52.642452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.069 [2024-07-15 11:39:52.642458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.069 [2024-07-15 11:39:52.642473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.069 qpair failed and we were unable to recover it. 
00:29:09.069 [2024-07-15 11:39:52.652408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.069 [2024-07-15 11:39:52.652465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.069 [2024-07-15 11:39:52.652479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.069 [2024-07-15 11:39:52.652486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.069 [2024-07-15 11:39:52.652492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.069 [2024-07-15 11:39:52.652506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.328 [2024-07-15 11:39:52.662484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.328 [2024-07-15 11:39:52.662548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.328 [2024-07-15 11:39:52.662562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.328 [2024-07-15 11:39:52.662569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.328 [2024-07-15 11:39:52.662575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.328 [2024-07-15 11:39:52.662590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.328 qpair failed and we were unable to recover it. 00:29:09.328 [2024-07-15 11:39:52.672418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.328 [2024-07-15 11:39:52.672486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.328 [2024-07-15 11:39:52.672501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.328 [2024-07-15 11:39:52.672508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.328 [2024-07-15 11:39:52.672515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.328 [2024-07-15 11:39:52.672532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.328 qpair failed and we were unable to recover it. 
00:29:09.328 [2024-07-15 11:39:52.682496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.328 [2024-07-15 11:39:52.682563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.328 [2024-07-15 11:39:52.682577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.328 [2024-07-15 11:39:52.682584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.328 [2024-07-15 11:39:52.682591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.328 [2024-07-15 11:39:52.682606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.328 qpair failed and we were unable to recover it. 00:29:09.328 [2024-07-15 11:39:52.692520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.328 [2024-07-15 11:39:52.692573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.328 [2024-07-15 11:39:52.692588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.328 [2024-07-15 11:39:52.692594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.328 [2024-07-15 11:39:52.692601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.328 [2024-07-15 11:39:52.692616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.328 qpair failed and we were unable to recover it. 00:29:09.328 [2024-07-15 11:39:52.702560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.328 [2024-07-15 11:39:52.702615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.328 [2024-07-15 11:39:52.702630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.328 [2024-07-15 11:39:52.702638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.328 [2024-07-15 11:39:52.702644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.328 [2024-07-15 11:39:52.702659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.328 qpair failed and we were unable to recover it. 
00:29:09.328 [2024-07-15 11:39:52.712593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.328 [2024-07-15 11:39:52.712653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.328 [2024-07-15 11:39:52.712668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.328 [2024-07-15 11:39:52.712675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.328 [2024-07-15 11:39:52.712681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.328 [2024-07-15 11:39:52.712695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.328 qpair failed and we were unable to recover it. 00:29:09.328 [2024-07-15 11:39:52.722545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.328 [2024-07-15 11:39:52.722603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.328 [2024-07-15 11:39:52.722622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.328 [2024-07-15 11:39:52.722629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.328 [2024-07-15 11:39:52.722636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.328 [2024-07-15 11:39:52.722650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.328 qpair failed and we were unable to recover it. 00:29:09.328 [2024-07-15 11:39:52.732576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.328 [2024-07-15 11:39:52.732631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.328 [2024-07-15 11:39:52.732647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.328 [2024-07-15 11:39:52.732654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.328 [2024-07-15 11:39:52.732660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.328 [2024-07-15 11:39:52.732674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.328 qpair failed and we were unable to recover it. 
00:29:09.328 [2024-07-15 11:39:52.742694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.328 [2024-07-15 11:39:52.742752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.328 [2024-07-15 11:39:52.742767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.328 [2024-07-15 11:39:52.742774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.328 [2024-07-15 11:39:52.742780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.328 [2024-07-15 11:39:52.742795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.328 qpair failed and we were unable to recover it. 00:29:09.328 [2024-07-15 11:39:52.752648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.328 [2024-07-15 11:39:52.752708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.328 [2024-07-15 11:39:52.752723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.328 [2024-07-15 11:39:52.752731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.328 [2024-07-15 11:39:52.752737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.328 [2024-07-15 11:39:52.752752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.328 qpair failed and we were unable to recover it. 00:29:09.328 [2024-07-15 11:39:52.762701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.328 [2024-07-15 11:39:52.762763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.328 [2024-07-15 11:39:52.762778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.329 [2024-07-15 11:39:52.762785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.329 [2024-07-15 11:39:52.762796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.329 [2024-07-15 11:39:52.762811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.329 qpair failed and we were unable to recover it. 
00:29:09.329 [2024-07-15 11:39:52.772687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.329 [2024-07-15 11:39:52.772753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.329 [2024-07-15 11:39:52.772769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.329 [2024-07-15 11:39:52.772776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.329 [2024-07-15 11:39:52.772782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.329 [2024-07-15 11:39:52.772796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.329 qpair failed and we were unable to recover it. 00:29:09.329 [2024-07-15 11:39:52.782810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.329 [2024-07-15 11:39:52.782898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.329 [2024-07-15 11:39:52.782912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.329 [2024-07-15 11:39:52.782920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.329 [2024-07-15 11:39:52.782926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.329 [2024-07-15 11:39:52.782940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.329 qpair failed and we were unable to recover it. 00:29:09.329 [2024-07-15 11:39:52.792818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.329 [2024-07-15 11:39:52.792876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.329 [2024-07-15 11:39:52.792891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.329 [2024-07-15 11:39:52.792898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.329 [2024-07-15 11:39:52.792905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.329 [2024-07-15 11:39:52.792920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.329 qpair failed and we were unable to recover it. 
00:29:09.329 [2024-07-15 11:39:52.802779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.329 [2024-07-15 11:39:52.802838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.329 [2024-07-15 11:39:52.802852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.329 [2024-07-15 11:39:52.802859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.329 [2024-07-15 11:39:52.802866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.329 [2024-07-15 11:39:52.802880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.329 qpair failed and we were unable to recover it. 00:29:09.329 [2024-07-15 11:39:52.812802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.329 [2024-07-15 11:39:52.812862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.329 [2024-07-15 11:39:52.812877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.329 [2024-07-15 11:39:52.812884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.329 [2024-07-15 11:39:52.812890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.329 [2024-07-15 11:39:52.812905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.329 qpair failed and we were unable to recover it. 00:29:09.329 [2024-07-15 11:39:52.822835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.329 [2024-07-15 11:39:52.822917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.329 [2024-07-15 11:39:52.822932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.329 [2024-07-15 11:39:52.822940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.329 [2024-07-15 11:39:52.822946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.329 [2024-07-15 11:39:52.822961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.329 qpair failed and we were unable to recover it. 
00:29:09.329 [2024-07-15 11:39:52.832898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.329 [2024-07-15 11:39:52.832955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.329 [2024-07-15 11:39:52.832969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.329 [2024-07-15 11:39:52.832977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.329 [2024-07-15 11:39:52.832983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.329 [2024-07-15 11:39:52.832998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.329 qpair failed and we were unable to recover it. 00:29:09.329 [2024-07-15 11:39:52.842982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.329 [2024-07-15 11:39:52.843041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.329 [2024-07-15 11:39:52.843056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.329 [2024-07-15 11:39:52.843064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.329 [2024-07-15 11:39:52.843070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.329 [2024-07-15 11:39:52.843085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.329 qpair failed and we were unable to recover it. 00:29:09.329 [2024-07-15 11:39:52.853004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.329 [2024-07-15 11:39:52.853059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.329 [2024-07-15 11:39:52.853074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.329 [2024-07-15 11:39:52.853081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.329 [2024-07-15 11:39:52.853091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.329 [2024-07-15 11:39:52.853105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.329 qpair failed and we were unable to recover it. 
00:29:09.329 [2024-07-15 11:39:52.862993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.329 [2024-07-15 11:39:52.863052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.329 [2024-07-15 11:39:52.863067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.329 [2024-07-15 11:39:52.863074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.329 [2024-07-15 11:39:52.863080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.329 [2024-07-15 11:39:52.863095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.329 qpair failed and we were unable to recover it. 00:29:09.329 [2024-07-15 11:39:52.873038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.329 [2024-07-15 11:39:52.873099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.329 [2024-07-15 11:39:52.873114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.329 [2024-07-15 11:39:52.873121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.329 [2024-07-15 11:39:52.873128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.329 [2024-07-15 11:39:52.873143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.329 qpair failed and we were unable to recover it. 00:29:09.329 [2024-07-15 11:39:52.883058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.329 [2024-07-15 11:39:52.883122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.329 [2024-07-15 11:39:52.883137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.330 [2024-07-15 11:39:52.883144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.330 [2024-07-15 11:39:52.883150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.330 [2024-07-15 11:39:52.883165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.330 qpair failed and we were unable to recover it. 
00:29:09.330 [2024-07-15 11:39:52.893090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.330 [2024-07-15 11:39:52.893151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.330 [2024-07-15 11:39:52.893166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.330 [2024-07-15 11:39:52.893173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.330 [2024-07-15 11:39:52.893179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.330 [2024-07-15 11:39:52.893193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.330 qpair failed and we were unable to recover it. 00:29:09.330 [2024-07-15 11:39:52.903079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.330 [2024-07-15 11:39:52.903140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.330 [2024-07-15 11:39:52.903155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.330 [2024-07-15 11:39:52.903163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.330 [2024-07-15 11:39:52.903169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.330 [2024-07-15 11:39:52.903183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.330 qpair failed and we were unable to recover it. 00:29:09.330 [2024-07-15 11:39:52.913141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.330 [2024-07-15 11:39:52.913201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.330 [2024-07-15 11:39:52.913216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.330 [2024-07-15 11:39:52.913223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.330 [2024-07-15 11:39:52.913233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.330 [2024-07-15 11:39:52.913248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.330 qpair failed and we were unable to recover it. 
00:29:09.588 [2024-07-15 11:39:52.923160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:52.923248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:52.923263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:52.923270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:52.923277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:52.923291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 00:29:09.589 [2024-07-15 11:39:52.933235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:52.933317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:52.933331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:52.933338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:52.933345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:52.933360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 00:29:09.589 [2024-07-15 11:39:52.943197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:52.943263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:52.943278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:52.943288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:52.943294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:52.943309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 
00:29:09.589 [2024-07-15 11:39:52.953274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:52.953337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:52.953353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:52.953360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:52.953367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:52.953381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 00:29:09.589 [2024-07-15 11:39:52.963276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:52.963335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:52.963351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:52.963359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:52.963365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:52.963381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 00:29:09.589 [2024-07-15 11:39:52.973306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:52.973365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:52.973380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:52.973387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:52.973394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:52.973409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 
00:29:09.589 [2024-07-15 11:39:52.983381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:52.983438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:52.983453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:52.983461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:52.983467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:52.983482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 00:29:09.589 [2024-07-15 11:39:52.993439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:52.993507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:52.993522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:52.993530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:52.993536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:52.993551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 00:29:09.589 [2024-07-15 11:39:53.003418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:53.003481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:53.003496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:53.003504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:53.003511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:53.003525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 
00:29:09.589 [2024-07-15 11:39:53.013458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:53.013520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:53.013535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:53.013542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:53.013548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:53.013563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 00:29:09.589 [2024-07-15 11:39:53.023452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:53.023510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:53.023524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:53.023531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:53.023538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:53.023552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 00:29:09.589 [2024-07-15 11:39:53.033433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:53.033494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:53.033512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:53.033520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:53.033526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:53.033542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 
00:29:09.589 [2024-07-15 11:39:53.043503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:53.043567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:53.043582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:53.043589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:53.043595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:53.043610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 00:29:09.589 [2024-07-15 11:39:53.053538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:53.053599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:53.053614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.589 [2024-07-15 11:39:53.053621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.589 [2024-07-15 11:39:53.053628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.589 [2024-07-15 11:39:53.053642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.589 qpair failed and we were unable to recover it. 00:29:09.589 [2024-07-15 11:39:53.063511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.589 [2024-07-15 11:39:53.063565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.589 [2024-07-15 11:39:53.063580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.590 [2024-07-15 11:39:53.063587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.590 [2024-07-15 11:39:53.063594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.590 [2024-07-15 11:39:53.063609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.590 qpair failed and we were unable to recover it. 
00:29:09.590 [2024-07-15 11:39:53.073598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.590 [2024-07-15 11:39:53.073679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.590 [2024-07-15 11:39:53.073695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.590 [2024-07-15 11:39:53.073702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.590 [2024-07-15 11:39:53.073708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.590 [2024-07-15 11:39:53.073726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.590 qpair failed and we were unable to recover it. 00:29:09.590 [2024-07-15 11:39:53.083624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.590 [2024-07-15 11:39:53.083682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.590 [2024-07-15 11:39:53.083697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.590 [2024-07-15 11:39:53.083704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.590 [2024-07-15 11:39:53.083711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.590 [2024-07-15 11:39:53.083725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.590 qpair failed and we were unable to recover it. 00:29:09.590 [2024-07-15 11:39:53.093652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.590 [2024-07-15 11:39:53.093707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.590 [2024-07-15 11:39:53.093722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.590 [2024-07-15 11:39:53.093730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.590 [2024-07-15 11:39:53.093736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.590 [2024-07-15 11:39:53.093750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.590 qpair failed and we were unable to recover it. 
00:29:09.590 [2024-07-15 11:39:53.103638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.590 [2024-07-15 11:39:53.103695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.590 [2024-07-15 11:39:53.103710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.590 [2024-07-15 11:39:53.103717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.590 [2024-07-15 11:39:53.103723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.590 [2024-07-15 11:39:53.103738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.590 qpair failed and we were unable to recover it. 00:29:09.590 [2024-07-15 11:39:53.113747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.590 [2024-07-15 11:39:53.113810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.590 [2024-07-15 11:39:53.113825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.590 [2024-07-15 11:39:53.113832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.590 [2024-07-15 11:39:53.113838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.590 [2024-07-15 11:39:53.113854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.590 qpair failed and we were unable to recover it. 00:29:09.590 [2024-07-15 11:39:53.123784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.590 [2024-07-15 11:39:53.123874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.590 [2024-07-15 11:39:53.123895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.590 [2024-07-15 11:39:53.123904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.590 [2024-07-15 11:39:53.123910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.590 [2024-07-15 11:39:53.123925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.590 qpair failed and we were unable to recover it. 
00:29:09.590 [2024-07-15 11:39:53.133709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.590 [2024-07-15 11:39:53.133768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.590 [2024-07-15 11:39:53.133784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.590 [2024-07-15 11:39:53.133791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.590 [2024-07-15 11:39:53.133797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.590 [2024-07-15 11:39:53.133812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.590 qpair failed and we were unable to recover it. 00:29:09.590 [2024-07-15 11:39:53.143784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.590 [2024-07-15 11:39:53.143845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.590 [2024-07-15 11:39:53.143859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.590 [2024-07-15 11:39:53.143867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.590 [2024-07-15 11:39:53.143873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.590 [2024-07-15 11:39:53.143887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.590 qpair failed and we were unable to recover it. 00:29:09.590 [2024-07-15 11:39:53.153839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.590 [2024-07-15 11:39:53.153899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.590 [2024-07-15 11:39:53.153913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.590 [2024-07-15 11:39:53.153921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.590 [2024-07-15 11:39:53.153927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.590 [2024-07-15 11:39:53.153941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.590 qpair failed and we were unable to recover it. 
00:29:09.590 [2024-07-15 11:39:53.163801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.590 [2024-07-15 11:39:53.163895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.590 [2024-07-15 11:39:53.163910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.590 [2024-07-15 11:39:53.163916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.590 [2024-07-15 11:39:53.163926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.590 [2024-07-15 11:39:53.163941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.590 qpair failed and we were unable to recover it. 00:29:09.590 [2024-07-15 11:39:53.173882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.590 [2024-07-15 11:39:53.173939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.590 [2024-07-15 11:39:53.173955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.590 [2024-07-15 11:39:53.173962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.590 [2024-07-15 11:39:53.173969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.590 [2024-07-15 11:39:53.173983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.590 qpair failed and we were unable to recover it. 00:29:09.847 [2024-07-15 11:39:53.184016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.847 [2024-07-15 11:39:53.184117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.847 [2024-07-15 11:39:53.184132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.847 [2024-07-15 11:39:53.184138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.847 [2024-07-15 11:39:53.184145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.847 [2024-07-15 11:39:53.184159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.847 qpair failed and we were unable to recover it. 
00:29:09.847 [2024-07-15 11:39:53.193954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.847 [2024-07-15 11:39:53.194016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.847 [2024-07-15 11:39:53.194031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.847 [2024-07-15 11:39:53.194038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.847 [2024-07-15 11:39:53.194044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.847 [2024-07-15 11:39:53.194059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.847 qpair failed and we were unable to recover it. 00:29:09.847 [2024-07-15 11:39:53.204024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.847 [2024-07-15 11:39:53.204085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.847 [2024-07-15 11:39:53.204100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.847 [2024-07-15 11:39:53.204107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.847 [2024-07-15 11:39:53.204114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.847 [2024-07-15 11:39:53.204128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.847 qpair failed and we were unable to recover it. 00:29:09.847 [2024-07-15 11:39:53.214001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.847 [2024-07-15 11:39:53.214066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.214081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.214089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.214096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.214110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 
00:29:09.848 [2024-07-15 11:39:53.224034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.224091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.224106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.224114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.224120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.224135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 00:29:09.848 [2024-07-15 11:39:53.234084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.234158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.234174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.234182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.234189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.234204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 00:29:09.848 [2024-07-15 11:39:53.244096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.244154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.244169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.244177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.244184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.244199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 
00:29:09.848 [2024-07-15 11:39:53.254114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.254174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.254189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.254196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.254206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.254221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 00:29:09.848 [2024-07-15 11:39:53.264142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.264204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.264219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.264230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.264236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.264250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 00:29:09.848 [2024-07-15 11:39:53.274181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.274243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.274258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.274266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.274272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.274287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 
00:29:09.848 [2024-07-15 11:39:53.284206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.284272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.284286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.284293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.284299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.284313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 00:29:09.848 [2024-07-15 11:39:53.294231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.294292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.294307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.294314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.294320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.294335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 00:29:09.848 [2024-07-15 11:39:53.304257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.304317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.304332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.304339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.304345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.304361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 
00:29:09.848 [2024-07-15 11:39:53.314298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.314361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.314375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.314382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.314389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.314404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 00:29:09.848 [2024-07-15 11:39:53.324317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.324378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.324394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.324402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.324408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.324423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 00:29:09.848 [2024-07-15 11:39:53.334343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.334403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.334418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.334426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.334432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.334447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 
00:29:09.848 [2024-07-15 11:39:53.344372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.344467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.344482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.344492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.344499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.344514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 00:29:09.848 [2024-07-15 11:39:53.354434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.354512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.354527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.354535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.354541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.354556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 00:29:09.848 [2024-07-15 11:39:53.364438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.364498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.364513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.364521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.364527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.364541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 
00:29:09.848 [2024-07-15 11:39:53.374464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.374536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.374551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.374558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.374565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.374579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 00:29:09.848 [2024-07-15 11:39:53.384498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.384554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.384569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.848 [2024-07-15 11:39:53.384577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.848 [2024-07-15 11:39:53.384583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.848 [2024-07-15 11:39:53.384597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.848 qpair failed and we were unable to recover it. 00:29:09.848 [2024-07-15 11:39:53.394527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.848 [2024-07-15 11:39:53.394583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.848 [2024-07-15 11:39:53.394598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.849 [2024-07-15 11:39:53.394604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.849 [2024-07-15 11:39:53.394611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.849 [2024-07-15 11:39:53.394626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.849 qpair failed and we were unable to recover it. 
00:29:09.849 [2024-07-15 11:39:53.404554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.849 [2024-07-15 11:39:53.404613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.849 [2024-07-15 11:39:53.404628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.849 [2024-07-15 11:39:53.404635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.849 [2024-07-15 11:39:53.404642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.849 [2024-07-15 11:39:53.404656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.849 qpair failed and we were unable to recover it. 00:29:09.849 [2024-07-15 11:39:53.414641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.849 [2024-07-15 11:39:53.414704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.849 [2024-07-15 11:39:53.414719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.849 [2024-07-15 11:39:53.414726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.849 [2024-07-15 11:39:53.414732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.849 [2024-07-15 11:39:53.414747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.849 qpair failed and we were unable to recover it. 00:29:09.849 [2024-07-15 11:39:53.424644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.849 [2024-07-15 11:39:53.424698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.849 [2024-07-15 11:39:53.424712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.849 [2024-07-15 11:39:53.424720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.849 [2024-07-15 11:39:53.424726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.849 [2024-07-15 11:39:53.424741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.849 qpair failed and we were unable to recover it. 
00:29:09.849 [2024-07-15 11:39:53.434654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.849 [2024-07-15 11:39:53.434718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.849 [2024-07-15 11:39:53.434735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.849 [2024-07-15 11:39:53.434743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.849 [2024-07-15 11:39:53.434749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:09.849 [2024-07-15 11:39:53.434764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.849 qpair failed and we were unable to recover it. 00:29:10.106 [2024-07-15 11:39:53.444673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.106 [2024-07-15 11:39:53.444743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.106 [2024-07-15 11:39:53.444758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.106 [2024-07-15 11:39:53.444765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.106 [2024-07-15 11:39:53.444772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:10.106 [2024-07-15 11:39:53.444787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-07-15 11:39:53.454706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.106 [2024-07-15 11:39:53.454761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.106 [2024-07-15 11:39:53.454775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.106 [2024-07-15 11:39:53.454783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.106 [2024-07-15 11:39:53.454789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:10.106 [2024-07-15 11:39:53.454803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.106 qpair failed and we were unable to recover it. 
00:29:10.106 [2024-07-15 11:39:53.464743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.106 [2024-07-15 11:39:53.464807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.106 [2024-07-15 11:39:53.464821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.106 [2024-07-15 11:39:53.464828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.106 [2024-07-15 11:39:53.464834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:10.106 [2024-07-15 11:39:53.464848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-07-15 11:39:53.474770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.106 [2024-07-15 11:39:53.474830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.106 [2024-07-15 11:39:53.474846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.106 [2024-07-15 11:39:53.474853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.106 [2024-07-15 11:39:53.474859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:10.106 [2024-07-15 11:39:53.474877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-07-15 11:39:53.484783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.106 [2024-07-15 11:39:53.484843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.106 [2024-07-15 11:39:53.484857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.106 [2024-07-15 11:39:53.484865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.106 [2024-07-15 11:39:53.484871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:10.106 [2024-07-15 11:39:53.484885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.106 qpair failed and we were unable to recover it. 
00:29:10.106 [2024-07-15 11:39:53.494845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.106 [2024-07-15 11:39:53.494951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.106 [2024-07-15 11:39:53.494965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.106 [2024-07-15 11:39:53.494973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.106 [2024-07-15 11:39:53.494980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:10.106 [2024-07-15 11:39:53.494994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-07-15 11:39:53.504856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.106 [2024-07-15 11:39:53.504915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.106 [2024-07-15 11:39:53.504930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.106 [2024-07-15 11:39:53.504937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.106 [2024-07-15 11:39:53.504943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:10.106 [2024-07-15 11:39:53.504958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-07-15 11:39:53.514879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.106 [2024-07-15 11:39:53.514939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.106 [2024-07-15 11:39:53.514953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.106 [2024-07-15 11:39:53.514961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.106 [2024-07-15 11:39:53.514967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6260000b90 00:29:10.107 [2024-07-15 11:39:53.514981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.107 qpair failed and we were unable to recover it. 
00:29:10.107 [2024-07-15 11:39:53.524939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.107 [2024-07-15 11:39:53.525069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.107 [2024-07-15 11:39:53.525131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.107 [2024-07-15 11:39:53.525156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.107 [2024-07-15 11:39:53.525176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6258000b90 00:29:10.107 [2024-07-15 11:39:53.525223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-07-15 11:39:53.534952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.107 [2024-07-15 11:39:53.535053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.107 [2024-07-15 11:39:53.535082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.107 [2024-07-15 11:39:53.535097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.107 [2024-07-15 11:39:53.535110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6258000b90 00:29:10.107 [2024-07-15 11:39:53.535140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.107 qpair failed and we were unable to recover it. 00:29:10.107 [2024-07-15 11:39:53.535255] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:10.107 A controller has encountered a failure and is being reset. 00:29:10.107 Controller properly reset. 00:29:10.107 Initializing NVMe Controllers 00:29:10.107 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:10.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:10.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:10.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:10.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:10.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:10.107 Initialization complete. Launching workers. 
00:29:10.107 Starting thread on core 1 00:29:10.107 Starting thread on core 2 00:29:10.107 Starting thread on core 3 00:29:10.107 Starting thread on core 0 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:10.107 00:29:10.107 real 0m11.413s 00:29:10.107 user 0m21.382s 00:29:10.107 sys 0m4.515s 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.107 ************************************ 00:29:10.107 END TEST nvmf_target_disconnect_tc2 00:29:10.107 ************************************ 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:10.107 rmmod nvme_tcp 00:29:10.107 rmmod nvme_fabrics 00:29:10.107 rmmod nvme_keyring 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 766303 ']' 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 766303 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 766303 ']' 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 766303 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:10.107 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 766303 00:29:10.365 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:29:10.365 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:29:10.365 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 766303' 00:29:10.365 killing process with pid 766303 00:29:10.365 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 766303 00:29:10.365 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 766303 00:29:10.365 11:39:53 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:10.365 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:10.365 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:10.365 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:10.365 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:10.365 11:39:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.365 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:10.365 11:39:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.902 11:39:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:12.902 00:29:12.902 real 0m19.857s 00:29:12.902 user 0m48.952s 00:29:12.902 sys 0m9.180s 00:29:12.902 11:39:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:12.902 11:39:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:12.902 ************************************ 00:29:12.902 END TEST nvmf_target_disconnect 00:29:12.902 ************************************ 00:29:12.902 11:39:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:12.902 11:39:56 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:12.902 11:39:56 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:12.902 11:39:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.902 11:39:56 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:12.902 00:29:12.902 real 21m28.596s 00:29:12.902 user 45m38.508s 00:29:12.902 sys 6m42.892s 00:29:12.902 11:39:56 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:12.902 11:39:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.902 ************************************ 00:29:12.902 END TEST nvmf_tcp 00:29:12.902 ************************************ 00:29:12.902 11:39:56 -- common/autotest_common.sh@1142 -- # return 0 00:29:12.902 11:39:56 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:12.902 11:39:56 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:12.902 11:39:56 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:12.902 11:39:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.902 11:39:56 -- common/autotest_common.sh@10 -- # set +x 00:29:12.902 ************************************ 00:29:12.902 START TEST spdkcli_nvmf_tcp 00:29:12.902 ************************************ 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:12.902 * Looking for test storage... 
00:29:12.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.902 11:39:56 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=767928 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 767928 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 767928 ']' 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:12.903 11:39:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.903 [2024-07-15 11:39:56.300272] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:29:12.903 [2024-07-15 11:39:56.300325] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767928 ] 00:29:12.903 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.903 [2024-07-15 11:39:56.366097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:12.903 [2024-07-15 11:39:56.446329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.903 [2024-07-15 11:39:56.446331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.839 11:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:13.839 11:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:13.839 11:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:13.840 11:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:13.840 11:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:13.840 11:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:13.840 11:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:13.840 11:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:13.840 11:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:13.840 11:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:13.840 11:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:13.840 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:13.840 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:13.840 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:13.840 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:13.840 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:13.840 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:13.840 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:13.840 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:13.840 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:13.840 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:13.840 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:13.840 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:13.840 ' 00:29:16.373 [2024-07-15 11:39:59.722417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.754 [2024-07-15 11:40:01.006700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:20.288 [2024-07-15 11:40:03.394037] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:22.193 [2024-07-15 11:40:05.448463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:23.585 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:23.585 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:23.585 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:23.585 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:23.585 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:23.585 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:23.585 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:23.585 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:23.585 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:23.585 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:23.585 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:23.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:23.585 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:23.585 11:40:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:23.585 11:40:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:23.585 11:40:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.585 11:40:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:23.585 11:40:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:23.585 11:40:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.585 11:40:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:23.585 11:40:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:24.152 11:40:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:24.152 11:40:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:24.152 11:40:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:24.152 11:40:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:24.152 11:40:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.152 11:40:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:24.152 11:40:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:24.152 11:40:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.152 11:40:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:24.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:24.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:24.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:24.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:24.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:24.152 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:24.152 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:24.152 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:24.152 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:24.152 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:24.152 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:24.152 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:24.152 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:24.152 ' 00:29:29.418 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:29.418 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:29.418 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:29.418 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:29.418 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:29.418 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:29.418 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:29.418 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:29.418 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:29.418 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:29.418 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:29:29.418 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:29.418 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:29.418 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:29.418 11:40:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:29.418 11:40:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:29.418 11:40:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:29.418 11:40:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 767928 00:29:29.418 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 767928 ']' 00:29:29.418 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 767928 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 767928 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 767928' 00:29:29.677 killing process with pid 767928 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 767928 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 767928 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 767928 ']' 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 767928 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 767928 ']' 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 767928 00:29:29.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (767928) - No such process 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 767928 is not found' 00:29:29.677 Process with pid 767928 is not found 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:29.677 00:29:29.677 real 0m17.105s 00:29:29.677 user 0m37.183s 00:29:29.677 sys 0m0.872s 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:29.677 11:40:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:29.677 ************************************ 00:29:29.677 END TEST spdkcli_nvmf_tcp 00:29:29.677 ************************************ 00:29:29.935 11:40:13 -- common/autotest_common.sh@1142 -- # return 0 00:29:29.935 11:40:13 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:29.935 11:40:13 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:29.935 11:40:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:29.935 11:40:13 -- common/autotest_common.sh@10 -- # set +x 00:29:29.935 ************************************ 00:29:29.935 START TEST nvmf_identify_passthru 00:29:29.935 ************************************ 00:29:29.935 11:40:13 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:29.935 * Looking for test storage... 00:29:29.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:29.935 11:40:13 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.935 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:29.935 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.935 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.935 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.935 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.935 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.935 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.935 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.935 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.935 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.935 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.936 11:40:13 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.936 11:40:13 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.936 11:40:13 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.936 11:40:13 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.936 11:40:13 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.936 11:40:13 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.936 11:40:13 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:29.936 11:40:13 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:29.936 11:40:13 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.936 11:40:13 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.936 11:40:13 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.936 11:40:13 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.936 11:40:13 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.936 11:40:13 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.936 11:40:13 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.936 11:40:13 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:29.936 11:40:13 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.936 11:40:13 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.936 11:40:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:29.936 11:40:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:29.936 11:40:13 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:29.936 11:40:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.272 11:40:18 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.272 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:35.531 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:35.531 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:35.531 Found net devices under 0000:86:00.0: cvl_0_0 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:35.531 Found net devices under 0000:86:00.1: cvl_0_1 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
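The nvmf_tcp_init block traced below turns the two E810 ports discovered above into a point-to-point NVMe/TCP test rig on a single host: one port (cvl_0_0, the target side) is moved into a private network namespace while the other (cvl_0_1, the initiator side) stays in the root namespace, and both get addresses on 10.0.0.0/24. A condensed sketch of those steps, reusing the interface names, addresses and namespace name from this run, is:

    # move the target-side port into its own namespace; the initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator gets 10.0.0.1, target gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring both links (and loopback inside the namespace) up
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # admit NVMe/TCP traffic on the default port and sanity-check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1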
00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.531 11:40:18 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.531 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.531 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.531 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:35.531 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.531 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.531 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.531 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:35.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:29:35.789 00:29:35.789 --- 10.0.0.2 ping statistics --- 00:29:35.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.789 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:35.789 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:29:35.789 00:29:35.789 --- 10.0.0.1 ping statistics --- 00:29:35.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.789 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:29:35.790 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.790 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:35.790 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:35.790 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.790 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:35.790 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:35.790 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.790 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:35.790 11:40:19 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:35.790 11:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:35.790 11:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:29:35.790 11:40:19 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:29:35.790 11:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:29:35.790 11:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:29:35.790 11:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:35.790 11:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:29:35.790 11:40:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:35.790 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.978 
11:40:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:29:39.978 11:40:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:29:39.978 11:40:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:39.978 11:40:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:39.978 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.168 11:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:44.168 11:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:44.168 11:40:27 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:44.168 11:40:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:44.168 11:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:44.168 11:40:27 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:44.168 11:40:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:44.168 11:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=775085 00:29:44.168 11:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:44.168 11:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:44.168 11:40:27 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 775085 00:29:44.168 11:40:27 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 775085 ']' 00:29:44.168 11:40:27 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.168 11:40:27 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:44.168 11:40:27 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.168 11:40:27 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:44.168 11:40:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:44.168 [2024-07-15 11:40:27.616382] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:29:44.168 [2024-07-15 11:40:27.616430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.168 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.168 [2024-07-15 11:40:27.687116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:44.426 [2024-07-15 11:40:27.768065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.426 [2024-07-15 11:40:27.768101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
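The target for this test was launched just above with --wait-for-rpc, so the passthru-identify option can be set before the framework initializes. A condensed sketch of the RPC sequence the trace walks through next (the rpc_cmd calls are taken from this trace; rpc_cmd is assumed to be a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock) is:

    # enable identify passthru before framework init, then finish startup
    rpc_cmd nvmf_set_config --passthru-identify-ctrlr
    rpc_cmd framework_start_init

    # TCP transport, local NVMe drive attached as Nvme0, exported as a single-namespace subsystem
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then runs spdk_nvme_identify against the TCP listener and checks that the serial and model number it reports match the values read directly from the PCIe device earlier in the trace, which is what the passthru option is meant to guarantee.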
00:29:44.426 [2024-07-15 11:40:27.768107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.426 [2024-07-15 11:40:27.768113] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.426 [2024-07-15 11:40:27.768118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.426 [2024-07-15 11:40:27.768160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.426 [2024-07-15 11:40:27.768281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:44.426 [2024-07-15 11:40:27.768318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.426 [2024-07-15 11:40:27.768319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:29:44.990 11:40:28 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:44.990 INFO: Log level set to 20 00:29:44.990 INFO: Requests: 00:29:44.990 { 00:29:44.990 "jsonrpc": "2.0", 00:29:44.990 "method": "nvmf_set_config", 00:29:44.990 "id": 1, 00:29:44.990 "params": { 00:29:44.990 "admin_cmd_passthru": { 00:29:44.990 "identify_ctrlr": true 00:29:44.990 } 00:29:44.990 } 00:29:44.990 } 00:29:44.990 00:29:44.990 INFO: response: 00:29:44.990 { 00:29:44.990 "jsonrpc": "2.0", 00:29:44.990 "id": 1, 00:29:44.990 "result": true 00:29:44.990 } 00:29:44.990 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.990 11:40:28 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:44.990 INFO: Setting log level to 20 00:29:44.990 INFO: Setting log level to 20 00:29:44.990 INFO: Log level set to 20 00:29:44.990 INFO: Log level set to 20 00:29:44.990 INFO: Requests: 00:29:44.990 { 00:29:44.990 "jsonrpc": "2.0", 00:29:44.990 "method": "framework_start_init", 00:29:44.990 "id": 1 00:29:44.990 } 00:29:44.990 00:29:44.990 INFO: Requests: 00:29:44.990 { 00:29:44.990 "jsonrpc": "2.0", 00:29:44.990 "method": "framework_start_init", 00:29:44.990 "id": 1 00:29:44.990 } 00:29:44.990 00:29:44.990 [2024-07-15 11:40:28.513079] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:44.990 INFO: response: 00:29:44.990 { 00:29:44.990 "jsonrpc": "2.0", 00:29:44.990 "id": 1, 00:29:44.990 "result": true 00:29:44.990 } 00:29:44.990 00:29:44.990 INFO: response: 00:29:44.990 { 00:29:44.990 "jsonrpc": "2.0", 00:29:44.990 "id": 1, 00:29:44.990 "result": true 00:29:44.990 } 00:29:44.990 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.990 11:40:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.990 11:40:28 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:29:44.990 INFO: Setting log level to 40 00:29:44.990 INFO: Setting log level to 40 00:29:44.990 INFO: Setting log level to 40 00:29:44.990 [2024-07-15 11:40:28.526441] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.990 11:40:28 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:44.990 11:40:28 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.990 11:40:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.267 Nvme0n1 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.267 [2024-07-15 11:40:31.424931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.267 [ 00:29:48.267 { 00:29:48.267 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:48.267 "subtype": "Discovery", 00:29:48.267 "listen_addresses": [], 00:29:48.267 "allow_any_host": true, 00:29:48.267 "hosts": [] 00:29:48.267 }, 00:29:48.267 { 00:29:48.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.267 "subtype": "NVMe", 00:29:48.267 "listen_addresses": [ 00:29:48.267 { 00:29:48.267 "trtype": "TCP", 00:29:48.267 "adrfam": "IPv4", 00:29:48.267 "traddr": "10.0.0.2", 00:29:48.267 "trsvcid": "4420" 00:29:48.267 } 00:29:48.267 ], 00:29:48.267 "allow_any_host": true, 00:29:48.267 "hosts": [], 00:29:48.267 "serial_number": 
"SPDK00000000000001", 00:29:48.267 "model_number": "SPDK bdev Controller", 00:29:48.267 "max_namespaces": 1, 00:29:48.267 "min_cntlid": 1, 00:29:48.267 "max_cntlid": 65519, 00:29:48.267 "namespaces": [ 00:29:48.267 { 00:29:48.267 "nsid": 1, 00:29:48.267 "bdev_name": "Nvme0n1", 00:29:48.267 "name": "Nvme0n1", 00:29:48.267 "nguid": "6048743D1A9B4BF39AD0EAD408E1CA2A", 00:29:48.267 "uuid": "6048743d-1a9b-4bf3-9ad0-ead408e1ca2a" 00:29:48.267 } 00:29:48.267 ] 00:29:48.267 } 00:29:48.267 ] 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:48.267 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:48.267 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:48.267 11:40:31 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:48.267 11:40:31 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:48.267 11:40:31 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:48.267 11:40:31 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:48.267 11:40:31 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:48.267 11:40:31 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:48.267 11:40:31 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:48.267 rmmod nvme_tcp 00:29:48.267 rmmod nvme_fabrics 00:29:48.267 rmmod nvme_keyring 00:29:48.267 11:40:31 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:48.267 11:40:31 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:48.267 11:40:31 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:48.267 11:40:31 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 775085 ']' 00:29:48.267 11:40:31 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 775085 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 775085 ']' 00:29:48.267 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 775085 00:29:48.523 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:29:48.523 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:48.523 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 775085 00:29:48.523 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:48.523 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:48.523 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 775085' 00:29:48.523 killing process with pid 775085 00:29:48.523 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 775085 00:29:48.523 11:40:31 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 775085 00:29:49.893 11:40:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:49.893 11:40:33 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:49.893 11:40:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:49.893 11:40:33 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:49.893 11:40:33 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:49.893 11:40:33 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.893 11:40:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:49.893 11:40:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.431 11:40:35 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:52.431 00:29:52.431 real 0m22.139s 00:29:52.431 user 0m29.988s 00:29:52.431 sys 0m5.076s 00:29:52.431 11:40:35 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:52.431 11:40:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:52.431 ************************************ 00:29:52.431 END TEST nvmf_identify_passthru 00:29:52.431 ************************************ 00:29:52.431 11:40:35 -- common/autotest_common.sh@1142 -- # return 0 00:29:52.431 11:40:35 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:52.431 11:40:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:52.431 11:40:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:52.431 11:40:35 -- common/autotest_common.sh@10 -- # set +x 00:29:52.431 ************************************ 00:29:52.431 START TEST nvmf_dif 00:29:52.431 ************************************ 00:29:52.431 11:40:35 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:52.431 * Looking for test storage... 
00:29:52.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:52.431 11:40:35 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.431 11:40:35 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.431 11:40:35 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.431 11:40:35 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.432 11:40:35 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.432 11:40:35 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.432 11:40:35 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.432 11:40:35 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.432 11:40:35 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:29:52.432 11:40:35 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:52.432 11:40:35 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:52.432 11:40:35 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:52.432 11:40:35 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:52.432 11:40:35 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:52.432 11:40:35 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.432 11:40:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:52.432 11:40:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:52.432 11:40:35 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:52.432 11:40:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:57.760 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:57.760 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:57.760 Found net devices under 0000:86:00.0: cvl_0_0 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:57.760 Found net devices under 0000:86:00.1: cvl_0_1 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.760 11:40:41 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:57.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:29:57.760 00:29:57.760 --- 10.0.0.2 ping statistics --- 00:29:57.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.760 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:57.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:29:57.760 00:29:57.760 --- 10.0.0.1 ping statistics --- 00:29:57.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.760 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:57.760 11:40:41 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:01.051 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:01.051 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:01.051 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:01.051 11:40:44 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.051 11:40:44 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:01.051 11:40:44 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:01.051 11:40:44 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.051 11:40:44 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:01.051 11:40:44 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:01.051 11:40:44 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:01.051 11:40:44 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:01.051 11:40:44 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:01.051 11:40:44 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:01.051 11:40:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:01.051 11:40:44 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=780660 00:30:01.051 11:40:44 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:01.051 11:40:44 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 780660 00:30:01.051 11:40:44 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 780660 ']' 00:30:01.051 11:40:44 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.051 11:40:44 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:01.051 11:40:44 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.051 11:40:44 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:01.051 11:40:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:01.051 [2024-07-15 11:40:44.187754] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:30:01.051 [2024-07-15 11:40:44.187793] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.051 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.051 [2024-07-15 11:40:44.243152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.051 [2024-07-15 11:40:44.320643] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.051 [2024-07-15 11:40:44.320684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.051 [2024-07-15 11:40:44.320691] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.051 [2024-07-15 11:40:44.320697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.051 [2024-07-15 11:40:44.320701] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
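The dif target that was just started inside the namespace is configured next with a transport that can insert and strip protection information, backed by a null bdev carrying 16 bytes of metadata and DIF type 1. A condensed sketch of that setup (commands taken from the rpc_cmd calls traced below, again assuming rpc_cmd wraps scripts/rpc.py) is:

    # TCP transport with DIF insert/strip enabled on the target side
    rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip

    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

    # export it over TCP on the namespaced target address
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio then drives the subsystem through the spdk_bdev ioengine with a JSON config that attaches it as an NVMe-oF controller over TCP, so the DIF handling is exercised end to end.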
00:30:01.051 [2024-07-15 11:40:44.320720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.621 11:40:45 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:01.621 11:40:45 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:01.621 11:40:45 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:01.621 11:40:45 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:01.621 11:40:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:01.621 11:40:45 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.621 11:40:45 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:01.621 11:40:45 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:01.621 11:40:45 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.621 11:40:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:01.621 [2024-07-15 11:40:45.059055] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.621 11:40:45 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.621 11:40:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:01.621 11:40:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:01.621 11:40:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:01.621 11:40:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:01.621 ************************************ 00:30:01.621 START TEST fio_dif_1_default 00:30:01.621 ************************************ 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:01.621 bdev_null0 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:01.621 [2024-07-15 11:40:45.131366] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:01.621 { 00:30:01.621 "params": { 00:30:01.621 "name": "Nvme$subsystem", 00:30:01.621 "trtype": "$TEST_TRANSPORT", 00:30:01.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.621 "adrfam": "ipv4", 00:30:01.621 "trsvcid": "$NVMF_PORT", 00:30:01.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.621 "hdgst": ${hdgst:-false}, 00:30:01.621 "ddgst": ${ddgst:-false} 00:30:01.621 }, 00:30:01.621 "method": "bdev_nvme_attach_controller" 00:30:01.621 } 00:30:01.621 EOF 00:30:01.621 )") 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:01.621 "params": { 00:30:01.621 "name": "Nvme0", 00:30:01.621 "trtype": "tcp", 00:30:01.621 "traddr": "10.0.0.2", 00:30:01.621 "adrfam": "ipv4", 00:30:01.621 "trsvcid": "4420", 00:30:01.621 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:01.621 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:01.621 "hdgst": false, 00:30:01.621 "ddgst": false 00:30:01.621 }, 00:30:01.621 "method": "bdev_nvme_attach_controller" 00:30:01.621 }' 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:01.621 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:01.622 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:01.622 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:01.622 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:01.622 11:40:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:02.193 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:02.193 fio-3.35 00:30:02.193 Starting 1 thread 00:30:02.193 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.419 00:30:14.419 filename0: (groupid=0, jobs=1): err= 0: pid=781037: Mon Jul 15 11:40:56 2024 00:30:14.419 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10016msec) 00:30:14.419 slat (nsec): min=5901, max=50933, avg=6276.41, stdev=2119.55 00:30:14.419 clat (usec): min=40828, max=48121, avg=41028.65, stdev=476.26 00:30:14.419 lat (usec): min=40834, max=48164, avg=41034.93, stdev=477.00 00:30:14.419 clat percentiles (usec): 00:30:14.419 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:14.419 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:14.419 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:14.419 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:30:14.419 | 99.99th=[47973] 00:30:14.419 bw ( KiB/s): min= 384, max= 416, per=99.54%, avg=388.80, stdev=11.72, samples=20 00:30:14.419 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:30:14.419 
lat (msec) : 50=100.00% 00:30:14.419 cpu : usr=94.38%, sys=5.36%, ctx=15, majf=0, minf=251 00:30:14.419 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.419 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.419 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:14.419 00:30:14.419 Run status group 0 (all jobs): 00:30:14.419 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10016-10016msec 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.419 00:30:14.419 real 0m11.224s 00:30:14.419 user 0m16.471s 00:30:14.419 sys 0m0.874s 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:14.419 ************************************ 00:30:14.419 END TEST fio_dif_1_default 00:30:14.419 ************************************ 00:30:14.419 11:40:56 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:14.419 11:40:56 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:14.419 11:40:56 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:14.419 11:40:56 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.419 11:40:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:14.419 ************************************ 00:30:14.419 START TEST fio_dif_1_multi_subsystems 00:30:14.419 ************************************ 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:14.419 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:14.420 11:40:56 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.420 bdev_null0 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.420 [2024-07-15 11:40:56.422015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.420 bdev_null1 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:14.420 { 00:30:14.420 "params": { 00:30:14.420 "name": "Nvme$subsystem", 00:30:14.420 "trtype": "$TEST_TRANSPORT", 00:30:14.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:14.420 "adrfam": "ipv4", 00:30:14.420 "trsvcid": "$NVMF_PORT", 00:30:14.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:14.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:14.420 "hdgst": ${hdgst:-false}, 00:30:14.420 "ddgst": ${ddgst:-false} 00:30:14.420 }, 00:30:14.420 "method": "bdev_nvme_attach_controller" 00:30:14.420 } 00:30:14.420 EOF 00:30:14.420 )") 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:14.420 
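The transport, bdev and subsystem plumbing traced in these dif tests reduces to a short sequence of SPDK RPCs: create a TCP transport with DIF insert/strip, create a null bdev with 16 bytes of metadata and protection information type 1, wrap it in a subsystem, attach the namespace and expose a TCP listener. A rough equivalent driven through the rpc.py client (rpc_cmd in the trace is a thin wrapper around that script; the addresses and arguments below are the ones visible in the log, and the multi-subsystem variant simply repeats the bdev/subsystem steps for bdev_null1 and cnode1):

    # Sketch of the setup performed by create_transport/create_subsystems, one subsystem shown.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420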
11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:14.420 { 00:30:14.420 "params": { 00:30:14.420 "name": "Nvme$subsystem", 00:30:14.420 "trtype": "$TEST_TRANSPORT", 00:30:14.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:14.420 "adrfam": "ipv4", 00:30:14.420 "trsvcid": "$NVMF_PORT", 00:30:14.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:14.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:14.420 "hdgst": ${hdgst:-false}, 00:30:14.420 "ddgst": ${ddgst:-false} 00:30:14.420 }, 00:30:14.420 "method": "bdev_nvme_attach_controller" 00:30:14.420 } 00:30:14.420 EOF 00:30:14.420 )") 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:14.420 "params": { 00:30:14.420 "name": "Nvme0", 00:30:14.420 "trtype": "tcp", 00:30:14.420 "traddr": "10.0.0.2", 00:30:14.420 "adrfam": "ipv4", 00:30:14.420 "trsvcid": "4420", 00:30:14.420 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:14.420 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:14.420 "hdgst": false, 00:30:14.420 "ddgst": false 00:30:14.420 }, 00:30:14.420 "method": "bdev_nvme_attach_controller" 00:30:14.420 },{ 00:30:14.420 "params": { 00:30:14.420 "name": "Nvme1", 00:30:14.420 "trtype": "tcp", 00:30:14.420 "traddr": "10.0.0.2", 00:30:14.420 "adrfam": "ipv4", 00:30:14.420 "trsvcid": "4420", 00:30:14.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:14.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:14.420 "hdgst": false, 00:30:14.420 "ddgst": false 00:30:14.420 }, 00:30:14.420 "method": "bdev_nvme_attach_controller" 00:30:14.420 }' 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:14.420 11:40:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:14.420 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:14.420 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:14.420 fio-3.35 00:30:14.420 Starting 2 threads 00:30:14.420 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.391 00:30:24.391 filename0: (groupid=0, jobs=1): err= 0: pid=783010: Mon Jul 15 11:41:07 2024 00:30:24.391 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:30:24.391 slat (nsec): min=6049, max=24863, avg=7754.27, stdev=2489.78 00:30:24.391 clat (usec): min=40808, max=42007, avg=41005.34, stdev=170.73 00:30:24.391 lat (usec): min=40814, max=42018, avg=41013.09, stdev=170.94 00:30:24.391 clat percentiles (usec): 00:30:24.391 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:24.391 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:24.391 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:24.391 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:24.391 | 99.99th=[42206] 
00:30:24.391 bw ( KiB/s): min= 384, max= 416, per=33.76%, avg=388.80, stdev=11.72, samples=20 00:30:24.391 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:30:24.391 lat (msec) : 50=100.00% 00:30:24.391 cpu : usr=97.73%, sys=2.02%, ctx=10, majf=0, minf=141 00:30:24.391 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:24.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.391 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.391 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:24.391 filename1: (groupid=0, jobs=1): err= 0: pid=783011: Mon Jul 15 11:41:07 2024 00:30:24.391 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10001msec) 00:30:24.391 slat (nsec): min=6008, max=43730, avg=7173.76, stdev=2182.01 00:30:24.391 clat (usec): min=504, max=42510, avg=21032.49, stdev=20398.79 00:30:24.391 lat (usec): min=511, max=42535, avg=21039.67, stdev=20398.16 00:30:24.391 clat percentiles (usec): 00:30:24.391 | 1.00th=[ 523], 5.00th=[ 529], 10.00th=[ 537], 20.00th=[ 562], 00:30:24.391 | 30.00th=[ 635], 40.00th=[ 652], 50.00th=[41157], 60.00th=[41157], 00:30:24.391 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:30:24.391 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:30:24.391 | 99.99th=[42730] 00:30:24.391 bw ( KiB/s): min= 673, max= 768, per=66.05%, avg=759.63, stdev=25.59, samples=19 00:30:24.391 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:30:24.391 lat (usec) : 750=49.26%, 1000=0.63% 00:30:24.391 lat (msec) : 50=50.11% 00:30:24.391 cpu : usr=97.69%, sys=2.04%, ctx=15, majf=0, minf=90 00:30:24.391 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:24.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.391 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.391 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:24.391 00:30:24.391 Run status group 0 (all jobs): 00:30:24.391 READ: bw=1149KiB/s (1177kB/s), 390KiB/s-760KiB/s (399kB/s-778kB/s), io=11.2MiB (11.8MB), run=10001-10011msec 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
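A quick consistency check on the numbers in this report: with the 4 KiB block size these jobs use, the reported bandwidth and IOPS agree, and the two per-file rates sum to the aggregate in the run status line (illustrative arithmetic only):

    390 KiB/s / 4 KiB per read ~ 97.5 IOPS   (fio reports avg ~ 97.2 for filename0)
    760 KiB/s / 4 KiB per read = 190 IOPS    (fio reports avg ~ 189.9 for filename1)
    390 KiB/s + 760 KiB/s     ~ 1149 KiB/s   (the aggregate READ bandwidth reported below)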
00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:24.391 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:24.392 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:24.392 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.392 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:24.392 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.392 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:24.392 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.392 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:24.392 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.392 00:30:24.392 real 0m11.475s 00:30:24.392 user 0m26.267s 00:30:24.392 sys 0m0.690s 00:30:24.392 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:24.392 11:41:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:24.392 ************************************ 00:30:24.392 END TEST fio_dif_1_multi_subsystems 00:30:24.392 ************************************ 00:30:24.392 11:41:07 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:24.392 11:41:07 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:24.392 11:41:07 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:24.392 11:41:07 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:24.392 11:41:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:24.392 ************************************ 00:30:24.392 START TEST fio_dif_rand_params 00:30:24.392 ************************************ 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:24.392 bdev_null0 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:24.392 [2024-07-15 11:41:07.969637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:24.392 { 00:30:24.392 "params": { 00:30:24.392 "name": "Nvme$subsystem", 00:30:24.392 "trtype": "$TEST_TRANSPORT", 00:30:24.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.392 "adrfam": "ipv4", 00:30:24.392 "trsvcid": "$NVMF_PORT", 00:30:24.392 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.392 "hdgst": ${hdgst:-false}, 00:30:24.392 "ddgst": ${ddgst:-false} 00:30:24.392 }, 00:30:24.392 "method": "bdev_nvme_attach_controller" 00:30:24.392 } 00:30:24.392 EOF 00:30:24.392 )") 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:24.392 11:41:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:24.678 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:24.678 11:41:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:24.678 11:41:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:24.678 11:41:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:24.678 11:41:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:24.678 "params": { 00:30:24.678 "name": "Nvme0", 00:30:24.678 "trtype": "tcp", 00:30:24.678 "traddr": "10.0.0.2", 00:30:24.678 "adrfam": "ipv4", 00:30:24.678 "trsvcid": "4420", 00:30:24.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:24.678 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:24.678 "hdgst": false, 00:30:24.678 "ddgst": false 00:30:24.678 }, 00:30:24.678 "method": "bdev_nvme_attach_controller" 00:30:24.678 }' 00:30:24.678 11:41:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:24.678 11:41:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:24.678 11:41:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:24.678 11:41:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:24.678 11:41:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:24.678 11:41:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:24.678 11:41:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:24.678 11:41:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:24.678 11:41:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:24.678 11:41:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:24.939 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:24.939 ... 
00:30:24.939 fio-3.35 00:30:24.939 Starting 3 threads 00:30:24.939 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.495 00:30:31.495 filename0: (groupid=0, jobs=1): err= 0: pid=784970: Mon Jul 15 11:41:13 2024 00:30:31.495 read: IOPS=263, BW=33.0MiB/s (34.6MB/s)(165MiB/5002msec) 00:30:31.495 slat (nsec): min=6230, max=26188, avg=10360.04, stdev=2621.78 00:30:31.495 clat (usec): min=3876, max=90429, avg=11363.41, stdev=11623.18 00:30:31.495 lat (usec): min=3883, max=90440, avg=11373.77, stdev=11623.12 00:30:31.495 clat percentiles (usec): 00:30:31.495 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 5932], 20.00th=[ 6390], 00:30:31.495 | 30.00th=[ 6783], 40.00th=[ 7439], 50.00th=[ 8029], 60.00th=[ 8455], 00:30:31.495 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[11600], 95.00th=[47973], 00:30:31.495 | 99.00th=[49546], 99.50th=[50594], 99.90th=[88605], 99.95th=[90702], 00:30:31.495 | 99.99th=[90702] 00:30:31.495 bw ( KiB/s): min=27392, max=45056, per=29.91%, avg=33359.00, stdev=6485.89, samples=9 00:30:31.495 iops : min= 214, max= 352, avg=260.56, stdev=50.73, samples=9 00:30:31.495 lat (msec) : 4=0.08%, 10=80.06%, 20=11.37%, 50=7.73%, 100=0.76% 00:30:31.495 cpu : usr=95.34%, sys=4.36%, ctx=14, majf=0, minf=118 00:30:31.495 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:31.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.495 issued rwts: total=1319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.495 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:31.495 filename0: (groupid=0, jobs=1): err= 0: pid=784971: Mon Jul 15 11:41:13 2024 00:30:31.495 read: IOPS=287, BW=36.0MiB/s (37.7MB/s)(181MiB/5032msec) 00:30:31.495 slat (nsec): min=6233, max=28120, avg=10070.67, stdev=2700.71 00:30:31.495 clat (usec): min=3537, max=88751, avg=10403.55, stdev=10431.84 00:30:31.495 lat (usec): min=3544, max=88758, avg=10413.62, stdev=10432.05 00:30:31.495 clat percentiles (usec): 00:30:31.495 | 1.00th=[ 3884], 5.00th=[ 4178], 10.00th=[ 4490], 20.00th=[ 6063], 00:30:31.495 | 30.00th=[ 6652], 40.00th=[ 7177], 50.00th=[ 7963], 60.00th=[ 8717], 00:30:31.495 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[11731], 95.00th=[46924], 00:30:31.495 | 99.00th=[50070], 99.50th=[50070], 99.90th=[88605], 99.95th=[88605], 00:30:31.495 | 99.99th=[88605] 00:30:31.495 bw ( KiB/s): min=19968, max=51968, per=33.19%, avg=37017.60, stdev=9044.69, samples=10 00:30:31.495 iops : min= 156, max= 406, avg=289.20, stdev=70.66, samples=10 00:30:31.495 lat (msec) : 4=2.28%, 10=75.43%, 20=16.01%, 50=5.45%, 100=0.83% 00:30:31.495 cpu : usr=94.59%, sys=5.07%, ctx=13, majf=0, minf=149 00:30:31.495 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:31.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.495 issued rwts: total=1449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.495 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:31.496 filename0: (groupid=0, jobs=1): err= 0: pid=784972: Mon Jul 15 11:41:13 2024 00:30:31.496 read: IOPS=321, BW=40.1MiB/s (42.1MB/s)(202MiB/5032msec) 00:30:31.496 slat (nsec): min=6277, max=74177, avg=10226.58, stdev=2990.74 00:30:31.496 clat (usec): min=3568, max=51854, avg=9329.47, stdev=8764.53 00:30:31.496 lat (usec): min=3575, max=51865, avg=9339.70, stdev=8764.70 00:30:31.496 clat percentiles 
(usec): 00:30:31.496 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4555], 20.00th=[ 5800], 00:30:31.496 | 30.00th=[ 6390], 40.00th=[ 6783], 50.00th=[ 7308], 60.00th=[ 8225], 00:30:31.496 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10814], 95.00th=[12649], 00:30:31.496 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:30:31.496 | 99.99th=[51643] 00:30:31.496 bw ( KiB/s): min=20480, max=58112, per=37.03%, avg=41292.80, stdev=10379.18, samples=10 00:30:31.496 iops : min= 160, max= 454, avg=322.60, stdev=81.09, samples=10 00:30:31.496 lat (msec) : 4=1.49%, 10=81.81%, 20=12.25%, 50=3.28%, 100=1.18% 00:30:31.496 cpu : usr=94.63%, sys=5.07%, ctx=14, majf=0, minf=117 00:30:31.496 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:31.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.496 issued rwts: total=1616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.496 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:31.496 00:30:31.496 Run status group 0 (all jobs): 00:30:31.496 READ: bw=109MiB/s (114MB/s), 33.0MiB/s-40.1MiB/s (34.6MB/s-42.1MB/s), io=548MiB (575MB), run=5002-5032msec 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 bdev_null0 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 [2024-07-15 11:41:14.131648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 bdev_null1 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 bdev_null2 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.496 { 00:30:31.496 "params": { 00:30:31.496 "name": "Nvme$subsystem", 00:30:31.496 "trtype": "$TEST_TRANSPORT", 00:30:31.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.496 "adrfam": "ipv4", 00:30:31.496 "trsvcid": "$NVMF_PORT", 00:30:31.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.496 "hdgst": ${hdgst:-false}, 00:30:31.496 "ddgst": ${ddgst:-false} 00:30:31.496 }, 00:30:31.496 "method": "bdev_nvme_attach_controller" 00:30:31.496 } 00:30:31.496 EOF 00:30:31.496 )") 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:31.496 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.497 { 00:30:31.497 "params": { 00:30:31.497 "name": "Nvme$subsystem", 00:30:31.497 "trtype": "$TEST_TRANSPORT", 00:30:31.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.497 "adrfam": "ipv4", 00:30:31.497 "trsvcid": "$NVMF_PORT", 00:30:31.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.497 "hdgst": ${hdgst:-false}, 00:30:31.497 "ddgst": ${ddgst:-false} 00:30:31.497 }, 00:30:31.497 "method": "bdev_nvme_attach_controller" 00:30:31.497 } 00:30:31.497 EOF 00:30:31.497 )") 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:31.497 11:41:14 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.497 { 00:30:31.497 "params": { 00:30:31.497 "name": "Nvme$subsystem", 00:30:31.497 "trtype": "$TEST_TRANSPORT", 00:30:31.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.497 "adrfam": "ipv4", 00:30:31.497 "trsvcid": "$NVMF_PORT", 00:30:31.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.497 "hdgst": ${hdgst:-false}, 00:30:31.497 "ddgst": ${ddgst:-false} 00:30:31.497 }, 00:30:31.497 "method": "bdev_nvme_attach_controller" 00:30:31.497 } 00:30:31.497 EOF 00:30:31.497 )") 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:31.497 "params": { 00:30:31.497 "name": "Nvme0", 00:30:31.497 "trtype": "tcp", 00:30:31.497 "traddr": "10.0.0.2", 00:30:31.497 "adrfam": "ipv4", 00:30:31.497 "trsvcid": "4420", 00:30:31.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:31.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:31.497 "hdgst": false, 00:30:31.497 "ddgst": false 00:30:31.497 }, 00:30:31.497 "method": "bdev_nvme_attach_controller" 00:30:31.497 },{ 00:30:31.497 "params": { 00:30:31.497 "name": "Nvme1", 00:30:31.497 "trtype": "tcp", 00:30:31.497 "traddr": "10.0.0.2", 00:30:31.497 "adrfam": "ipv4", 00:30:31.497 "trsvcid": "4420", 00:30:31.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:31.497 "hdgst": false, 00:30:31.497 "ddgst": false 00:30:31.497 }, 00:30:31.497 "method": "bdev_nvme_attach_controller" 00:30:31.497 },{ 00:30:31.497 "params": { 00:30:31.497 "name": "Nvme2", 00:30:31.497 "trtype": "tcp", 00:30:31.497 "traddr": "10.0.0.2", 00:30:31.497 "adrfam": "ipv4", 00:30:31.497 "trsvcid": "4420", 00:30:31.497 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:31.497 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:31.497 "hdgst": false, 00:30:31.497 "ddgst": false 00:30:31.497 }, 00:30:31.497 "method": "bdev_nvme_attach_controller" 00:30:31.497 }' 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:31.497 
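The ldd/grep/awk probe traced here and the LD_PRELOAD launch that follows are the core of how these tests drive fio: the stock fio binary is preloaded with SPDK's bdev ioengine plugin (plus a sanitizer runtime if the plugin links one) and pointed at the generated JSON target config. Condensed into plain shell, with ordinary file names standing in for the /dev/fd/62 and /dev/fd/61 pipes the harness actually uses (an assumption made for readability):

    # Sketch: run fio through the SPDK bdev plugin against the generated bdev JSON config.
    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    # Preload the ASan runtime first if the plugin was built with it (empty otherwise).
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./job.fio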
11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:31.497 11:41:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.497 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:31.497 ... 00:30:31.497 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:31.497 ... 00:30:31.497 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:31.497 ... 00:30:31.497 fio-3.35 00:30:31.497 Starting 24 threads 00:30:31.497 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.769 00:30:43.769 filename0: (groupid=0, jobs=1): err= 0: pid=786144: Mon Jul 15 11:41:25 2024 00:30:43.769 read: IOPS=578, BW=2314KiB/s (2370kB/s)(22.7MiB/10024msec) 00:30:43.769 slat (nsec): min=7117, max=57789, avg=15321.03, stdev=6976.80 00:30:43.769 clat (usec): min=3360, max=34273, avg=27530.41, stdev=2884.60 00:30:43.769 lat (usec): min=3381, max=34327, avg=27545.73, stdev=2884.69 00:30:43.769 clat percentiles (usec): 00:30:43.769 | 1.00th=[ 5145], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.769 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:30:43.769 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:30:43.769 | 99.00th=[28967], 99.50th=[28967], 99.90th=[33817], 99.95th=[34341], 00:30:43.769 | 99.99th=[34341] 00:30:43.769 bw ( KiB/s): min= 2176, max= 2872, per=4.21%, avg=2313.20, stdev=139.56, samples=20 00:30:43.769 iops : min= 544, max= 718, avg=578.30, stdev=34.89, samples=20 00:30:43.769 lat (msec) : 4=0.59%, 10=0.79%, 20=0.40%, 50=98.22% 00:30:43.769 cpu : usr=98.72%, sys=0.91%, ctx=12, majf=0, minf=101 00:30:43.769 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:43.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.769 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.769 issued rwts: total=5799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.769 filename0: (groupid=0, jobs=1): err= 0: pid=786146: Mon Jul 15 11:41:25 2024 00:30:43.769 read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.3MiB/10005msec) 00:30:43.769 slat (nsec): min=7448, max=48098, avg=20905.21, stdev=6008.80 00:30:43.769 clat (usec): min=15343, max=45855, avg=27838.04, stdev=930.41 00:30:43.769 lat (usec): min=15352, max=45886, avg=27858.94, stdev=930.52 00:30:43.769 clat percentiles (usec): 00:30:43.769 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.769 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:30:43.769 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.769 | 99.00th=[28967], 99.50th=[28967], 99.90th=[37487], 99.95th=[37487], 00:30:43.769 | 99.99th=[45876] 00:30:43.769 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2278.40, stdev=52.53, samples=20 00:30:43.769 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 
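(Quick cross-check of the figures in these per-job reports: the jobs read 4096B blocks, so the bandwidth fio prints is simply IOPS multiplied by the block size. A minimal awk sketch using only the averages printed for the job above — no values beyond those shown in this log are assumed:

    # Sanity check: reported bw (KiB/s) = avg IOPS x block size / 1024
    awk 'BEGIN {
        iops = 569.60                            # "iops : ... avg=569.60" from the report above
        bs   = 4096                              # read block size, "bs=(R) 4096B-4096B"
        printf "%.2f KiB/s\n", iops * bs / 1024  # prints 2278.40, matching "bw ... avg=2278.40"
    }'
)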
00:30:43.769 lat (msec) : 20=0.32%, 50=99.68% 00:30:43.769 cpu : usr=98.60%, sys=1.02%, ctx=14, majf=0, minf=83 00:30:43.769 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:43.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.769 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.769 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.769 filename0: (groupid=0, jobs=1): err= 0: pid=786147: Mon Jul 15 11:41:25 2024 00:30:43.769 read: IOPS=570, BW=2284KiB/s (2339kB/s)(22.3MiB/10004msec) 00:30:43.769 slat (nsec): min=8895, max=75867, avg=23256.51, stdev=12070.43 00:30:43.769 clat (usec): min=9689, max=55013, avg=27771.14, stdev=1831.52 00:30:43.769 lat (usec): min=9701, max=55059, avg=27794.40, stdev=1831.84 00:30:43.769 clat percentiles (usec): 00:30:43.769 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:30:43.769 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:30:43.769 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.769 | 99.00th=[28705], 99.50th=[29230], 99.90th=[54789], 99.95th=[54789], 00:30:43.769 | 99.99th=[54789] 00:30:43.769 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2277.05, stdev=68.52, samples=19 00:30:43.769 iops : min= 512, max= 576, avg=569.26, stdev=17.13, samples=19 00:30:43.769 lat (msec) : 10=0.28%, 20=0.28%, 50=99.16%, 100=0.28% 00:30:43.769 cpu : usr=98.83%, sys=0.78%, ctx=7, majf=0, minf=60 00:30:43.769 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:43.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.769 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.769 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.769 filename0: (groupid=0, jobs=1): err= 0: pid=786148: Mon Jul 15 11:41:25 2024 00:30:43.769 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10006msec) 00:30:43.769 slat (nsec): min=9874, max=48810, avg=23069.20, stdev=5885.05 00:30:43.769 clat (usec): min=17257, max=67516, avg=27901.07, stdev=2188.73 00:30:43.769 lat (usec): min=17285, max=67541, avg=27924.14, stdev=2188.15 00:30:43.769 clat percentiles (usec): 00:30:43.769 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:30:43.769 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:30:43.769 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.769 | 99.00th=[28705], 99.50th=[28967], 99.90th=[67634], 99.95th=[67634], 00:30:43.769 | 99.99th=[67634] 00:30:43.769 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2270.32, stdev=71.93, samples=19 00:30:43.769 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:30:43.769 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:30:43.769 cpu : usr=98.90%, sys=0.72%, ctx=11, majf=0, minf=65 00:30:43.769 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:43.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.769 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.769 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.769 filename0: (groupid=0, jobs=1): err= 0: 
pid=786149: Mon Jul 15 11:41:25 2024 00:30:43.769 read: IOPS=571, BW=2288KiB/s (2342kB/s)(22.4MiB/10016msec) 00:30:43.769 slat (nsec): min=7122, max=43772, avg=16335.17, stdev=4799.96 00:30:43.769 clat (usec): min=17493, max=35958, avg=27833.15, stdev=954.61 00:30:43.769 lat (usec): min=17502, max=35977, avg=27849.48, stdev=954.74 00:30:43.769 clat percentiles (usec): 00:30:43.769 | 1.00th=[25297], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.769 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:30:43.769 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:30:43.769 | 99.00th=[28705], 99.50th=[29230], 99.90th=[35914], 99.95th=[35914], 00:30:43.769 | 99.99th=[35914] 00:30:43.769 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2284.80, stdev=46.89, samples=20 00:30:43.769 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:30:43.769 lat (msec) : 20=0.56%, 50=99.44% 00:30:43.769 cpu : usr=98.38%, sys=1.24%, ctx=27, majf=0, minf=125 00:30:43.769 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:43.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.769 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.769 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.769 filename0: (groupid=0, jobs=1): err= 0: pid=786150: Mon Jul 15 11:41:25 2024 00:30:43.769 read: IOPS=584, BW=2337KiB/s (2393kB/s)(22.8MiB/10004msec) 00:30:43.769 slat (nsec): min=6752, max=46184, avg=12442.31, stdev=6707.25 00:30:43.769 clat (usec): min=9176, max=71580, avg=27336.07, stdev=4329.85 00:30:43.769 lat (usec): min=9184, max=71597, avg=27348.51, stdev=4329.09 00:30:43.769 clat percentiles (usec): 00:30:43.769 | 1.00th=[15795], 5.00th=[20841], 10.00th=[22152], 20.00th=[25035], 00:30:43.769 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:30:43.769 | 70.00th=[27919], 80.00th=[28181], 90.00th=[30802], 95.00th=[33817], 00:30:43.769 | 99.00th=[35914], 99.50th=[39584], 99.90th=[71828], 99.95th=[71828], 00:30:43.769 | 99.99th=[71828] 00:30:43.769 bw ( KiB/s): min= 2052, max= 2496, per=4.24%, avg=2326.95, stdev=99.92, samples=19 00:30:43.769 iops : min= 513, max= 624, avg=581.74, stdev=24.98, samples=19 00:30:43.769 lat (msec) : 10=0.10%, 20=2.46%, 50=97.16%, 100=0.27% 00:30:43.769 cpu : usr=98.80%, sys=0.82%, ctx=8, majf=0, minf=102 00:30:43.769 IO depths : 1=0.6%, 2=1.2%, 4=4.5%, 8=78.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:30:43.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.769 complete : 0=0.0%, 4=89.4%, 8=8.4%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.769 issued rwts: total=5844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.769 filename0: (groupid=0, jobs=1): err= 0: pid=786151: Mon Jul 15 11:41:25 2024 00:30:43.769 read: IOPS=571, BW=2285KiB/s (2340kB/s)(22.4MiB/10018msec) 00:30:43.769 slat (nsec): min=7444, max=47521, avg=21608.09, stdev=6914.88 00:30:43.769 clat (usec): min=17089, max=41440, avg=27817.39, stdev=895.10 00:30:43.770 lat (usec): min=17097, max=41454, avg=27839.00, stdev=894.83 00:30:43.770 clat percentiles (usec): 00:30:43.770 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.770 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:30:43.770 | 70.00th=[27919], 80.00th=[27919], 
90.00th=[28181], 95.00th=[28443], 00:30:43.770 | 99.00th=[28705], 99.50th=[28967], 99.90th=[35914], 99.95th=[35914], 00:30:43.770 | 99.99th=[41681] 00:30:43.770 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2284.80, stdev=46.89, samples=20 00:30:43.770 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:30:43.770 lat (msec) : 20=0.45%, 50=99.55% 00:30:43.770 cpu : usr=98.68%, sys=0.94%, ctx=6, majf=0, minf=73 00:30:43.770 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:43.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 issued rwts: total=5722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.770 filename0: (groupid=0, jobs=1): err= 0: pid=786152: Mon Jul 15 11:41:25 2024 00:30:43.770 read: IOPS=575, BW=2304KiB/s (2359kB/s)(22.5MiB/10009msec) 00:30:43.770 slat (nsec): min=6171, max=45493, avg=20698.60, stdev=7541.04 00:30:43.770 clat (usec): min=9635, max=70111, avg=27605.79, stdev=2747.83 00:30:43.770 lat (usec): min=9662, max=70128, avg=27626.48, stdev=2748.33 00:30:43.770 clat percentiles (usec): 00:30:43.770 | 1.00th=[17433], 5.00th=[25560], 10.00th=[27395], 20.00th=[27657], 00:30:43.770 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:30:43.770 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.770 | 99.00th=[33817], 99.50th=[35914], 99.90th=[60556], 99.95th=[60556], 00:30:43.770 | 99.99th=[69731] 00:30:43.770 bw ( KiB/s): min= 2144, max= 2416, per=4.17%, avg=2292.21, stdev=63.52, samples=19 00:30:43.770 iops : min= 536, max= 604, avg=573.05, stdev=15.88, samples=19 00:30:43.770 lat (msec) : 10=0.28%, 20=1.28%, 50=98.16%, 100=0.28% 00:30:43.770 cpu : usr=98.68%, sys=0.94%, ctx=19, majf=0, minf=86 00:30:43.770 IO depths : 1=4.9%, 2=9.9%, 4=20.5%, 8=56.3%, 16=8.3%, 32=0.0%, >=64=0.0% 00:30:43.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 complete : 0=0.0%, 4=93.0%, 8=2.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 issued rwts: total=5764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.770 filename1: (groupid=0, jobs=1): err= 0: pid=786153: Mon Jul 15 11:41:25 2024 00:30:43.770 read: IOPS=571, BW=2287KiB/s (2342kB/s)(22.3MiB/10001msec) 00:30:43.770 slat (nsec): min=7086, max=77391, avg=21617.84, stdev=7394.89 00:30:43.770 clat (usec): min=16136, max=49682, avg=27806.61, stdev=1389.09 00:30:43.770 lat (usec): min=16143, max=49695, avg=27828.23, stdev=1389.40 00:30:43.770 clat percentiles (usec): 00:30:43.770 | 1.00th=[22414], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:30:43.770 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:30:43.770 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.770 | 99.00th=[28705], 99.50th=[33162], 99.90th=[46400], 99.95th=[46400], 00:30:43.770 | 99.99th=[49546] 00:30:43.770 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2286.32, stdev=42.96, samples=19 00:30:43.770 iops : min= 544, max= 576, avg=571.58, stdev=10.74, samples=19 00:30:43.770 lat (msec) : 20=0.63%, 50=99.37% 00:30:43.770 cpu : usr=98.71%, sys=0.91%, ctx=13, majf=0, minf=64 00:30:43.770 IO depths : 1=6.1%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:30:43.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:30:43.770 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 issued rwts: total=5718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.770 filename1: (groupid=0, jobs=1): err= 0: pid=786154: Mon Jul 15 11:41:25 2024 00:30:43.770 read: IOPS=578, BW=2314KiB/s (2370kB/s)(22.6MiB/10012msec) 00:30:43.770 slat (nsec): min=6900, max=60951, avg=15522.21, stdev=5058.16 00:30:43.770 clat (usec): min=3298, max=33825, avg=27518.25, stdev=2869.96 00:30:43.770 lat (usec): min=3309, max=33858, avg=27533.77, stdev=2870.00 00:30:43.770 clat percentiles (usec): 00:30:43.770 | 1.00th=[ 5342], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.770 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:30:43.770 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:30:43.770 | 99.00th=[28705], 99.50th=[29230], 99.90th=[32900], 99.95th=[32900], 00:30:43.770 | 99.99th=[33817] 00:30:43.770 bw ( KiB/s): min= 2176, max= 2816, per=4.21%, avg=2310.40, stdev=127.83, samples=20 00:30:43.770 iops : min= 544, max= 704, avg=577.60, stdev=31.96, samples=20 00:30:43.770 lat (msec) : 4=0.67%, 10=0.71%, 20=0.55%, 50=98.07% 00:30:43.770 cpu : usr=98.29%, sys=1.33%, ctx=18, majf=0, minf=63 00:30:43.770 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:43.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.770 filename1: (groupid=0, jobs=1): err= 0: pid=786155: Mon Jul 15 11:41:25 2024 00:30:43.770 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.3MiB/10019msec) 00:30:43.770 slat (nsec): min=7213, max=45206, avg=20617.06, stdev=6226.70 00:30:43.770 clat (usec): min=15975, max=45512, avg=27866.14, stdev=1197.29 00:30:43.770 lat (usec): min=15985, max=45537, avg=27886.76, stdev=1197.32 00:30:43.770 clat percentiles (usec): 00:30:43.770 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.770 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:30:43.770 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.770 | 99.00th=[28967], 99.50th=[33424], 99.90th=[45351], 99.95th=[45351], 00:30:43.770 | 99.99th=[45351] 00:30:43.770 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2280.80, stdev=48.55, samples=20 00:30:43.770 iops : min= 544, max= 576, avg=570.20, stdev=12.14, samples=20 00:30:43.770 lat (msec) : 20=0.35%, 50=99.65% 00:30:43.770 cpu : usr=98.59%, sys=1.02%, ctx=12, majf=0, minf=80 00:30:43.770 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:43.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 issued rwts: total=5704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.770 filename1: (groupid=0, jobs=1): err= 0: pid=786156: Mon Jul 15 11:41:25 2024 00:30:43.770 read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.3MiB/10005msec) 00:30:43.770 slat (nsec): min=7804, max=41964, avg=19543.85, stdev=6112.99 00:30:43.770 clat (usec): min=16984, max=37671, avg=27858.95, stdev=825.37 00:30:43.770 lat (usec): 
min=17014, max=37705, avg=27878.49, stdev=825.14 00:30:43.770 clat percentiles (usec): 00:30:43.770 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.770 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:30:43.770 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:30:43.770 | 99.00th=[28705], 99.50th=[28967], 99.90th=[37487], 99.95th=[37487], 00:30:43.770 | 99.99th=[37487] 00:30:43.770 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2278.40, stdev=52.53, samples=20 00:30:43.770 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 00:30:43.770 lat (msec) : 20=0.28%, 50=99.72% 00:30:43.770 cpu : usr=98.59%, sys=1.03%, ctx=12, majf=0, minf=98 00:30:43.770 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:43.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.770 filename1: (groupid=0, jobs=1): err= 0: pid=786158: Mon Jul 15 11:41:25 2024 00:30:43.770 read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.3MiB/10016msec) 00:30:43.770 slat (nsec): min=7113, max=43954, avg=18837.59, stdev=5824.98 00:30:43.770 clat (usec): min=16911, max=45659, avg=27850.57, stdev=1309.17 00:30:43.770 lat (usec): min=16919, max=45680, avg=27869.40, stdev=1309.45 00:30:43.770 clat percentiles (usec): 00:30:43.770 | 1.00th=[25035], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.770 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:30:43.770 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.770 | 99.00th=[28967], 99.50th=[36963], 99.90th=[45351], 99.95th=[45876], 00:30:43.770 | 99.99th=[45876] 00:30:43.770 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2280.80, stdev=48.55, samples=20 00:30:43.770 iops : min= 544, max= 576, avg=570.20, stdev=12.14, samples=20 00:30:43.770 lat (msec) : 20=0.58%, 50=99.42% 00:30:43.770 cpu : usr=98.53%, sys=1.09%, ctx=16, majf=0, minf=86 00:30:43.770 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:30:43.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 issued rwts: total=5718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.770 filename1: (groupid=0, jobs=1): err= 0: pid=786159: Mon Jul 15 11:41:25 2024 00:30:43.770 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10006msec) 00:30:43.770 slat (nsec): min=6603, max=86551, avg=22986.23, stdev=6723.22 00:30:43.770 clat (usec): min=9702, max=57273, avg=27816.47, stdev=2010.32 00:30:43.770 lat (usec): min=9718, max=57290, avg=27839.46, stdev=2010.04 00:30:43.770 clat percentiles (usec): 00:30:43.770 | 1.00th=[23725], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:30:43.770 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:30:43.770 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.770 | 99.00th=[28967], 99.50th=[33162], 99.90th=[57410], 99.95th=[57410], 00:30:43.770 | 99.99th=[57410] 00:30:43.770 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2273.26, stdev=68.94, samples=19 00:30:43.770 iops : min= 512, max= 576, avg=568.32, 
stdev=17.23, samples=19 00:30:43.770 lat (msec) : 10=0.28%, 20=0.32%, 50=99.12%, 100=0.28% 00:30:43.770 cpu : usr=98.67%, sys=0.95%, ctx=8, majf=0, minf=66 00:30:43.770 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:30:43.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.770 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.770 filename1: (groupid=0, jobs=1): err= 0: pid=786160: Mon Jul 15 11:41:25 2024 00:30:43.770 read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.3MiB/10005msec) 00:30:43.770 slat (nsec): min=8078, max=43601, avg=20860.44, stdev=5941.71 00:30:43.770 clat (usec): min=12729, max=42487, avg=27832.76, stdev=1004.09 00:30:43.770 lat (usec): min=12747, max=42501, avg=27853.62, stdev=1004.24 00:30:43.770 clat percentiles (usec): 00:30:43.771 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.771 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:30:43.771 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.771 | 99.00th=[28967], 99.50th=[28967], 99.90th=[41157], 99.95th=[41681], 00:30:43.771 | 99.99th=[42730] 00:30:43.771 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2278.40, stdev=52.53, samples=20 00:30:43.771 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 00:30:43.771 lat (msec) : 20=0.35%, 50=99.65% 00:30:43.771 cpu : usr=98.68%, sys=0.95%, ctx=9, majf=0, minf=58 00:30:43.771 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:43.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.771 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.771 filename1: (groupid=0, jobs=1): err= 0: pid=786161: Mon Jul 15 11:41:25 2024 00:30:43.771 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:30:43.771 slat (nsec): min=6293, max=47887, avg=23099.21, stdev=6226.06 00:30:43.771 clat (usec): min=17266, max=63340, avg=27893.72, stdev=2001.45 00:30:43.771 lat (usec): min=17286, max=63357, avg=27916.82, stdev=2000.71 00:30:43.771 clat percentiles (usec): 00:30:43.771 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:30:43.771 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:30:43.771 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.771 | 99.00th=[28705], 99.50th=[29230], 99.90th=[63177], 99.95th=[63177], 00:30:43.771 | 99.99th=[63177] 00:30:43.771 bw ( KiB/s): min= 2052, max= 2304, per=4.13%, avg=2270.53, stdev=71.25, samples=19 00:30:43.771 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:30:43.771 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:30:43.771 cpu : usr=98.61%, sys=1.02%, ctx=12, majf=0, minf=50 00:30:43.771 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:30:43.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.771 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.771 filename2: 
(groupid=0, jobs=1): err= 0: pid=786162: Mon Jul 15 11:41:25 2024 00:30:43.771 read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.3MiB/10005msec) 00:30:43.771 slat (nsec): min=7565, max=49122, avg=19211.89, stdev=6511.22 00:30:43.771 clat (usec): min=14273, max=41690, avg=27864.74, stdev=973.56 00:30:43.771 lat (usec): min=14282, max=41704, avg=27883.95, stdev=973.39 00:30:43.771 clat percentiles (usec): 00:30:43.771 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.771 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:30:43.771 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:30:43.771 | 99.00th=[28967], 99.50th=[28967], 99.90th=[38011], 99.95th=[41157], 00:30:43.771 | 99.99th=[41681] 00:30:43.771 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2278.40, stdev=52.53, samples=20 00:30:43.771 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 00:30:43.771 lat (msec) : 20=0.35%, 50=99.65% 00:30:43.771 cpu : usr=98.06%, sys=1.52%, ctx=33, majf=0, minf=66 00:30:43.771 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:43.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.771 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.771 filename2: (groupid=0, jobs=1): err= 0: pid=786163: Mon Jul 15 11:41:25 2024 00:30:43.771 read: IOPS=576, BW=2306KiB/s (2362kB/s)(22.5MiB/10004msec) 00:30:43.771 slat (nsec): min=6743, max=56575, avg=14029.11, stdev=7768.90 00:30:43.771 clat (usec): min=10020, max=85168, avg=27693.12, stdev=4259.86 00:30:43.771 lat (usec): min=10033, max=85186, avg=27707.15, stdev=4258.99 00:30:43.771 clat percentiles (usec): 00:30:43.771 | 1.00th=[19792], 5.00th=[21365], 10.00th=[22676], 20.00th=[25822], 00:30:43.771 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:30:43.771 | 70.00th=[28181], 80.00th=[28443], 90.00th=[32375], 95.00th=[34341], 00:30:43.771 | 99.00th=[36439], 99.50th=[40633], 99.90th=[71828], 99.95th=[85459], 00:30:43.771 | 99.99th=[85459] 00:30:43.771 bw ( KiB/s): min= 1923, max= 2368, per=4.18%, avg=2297.42, stdev=94.19, samples=19 00:30:43.771 iops : min= 480, max= 592, avg=574.32, stdev=23.71, samples=19 00:30:43.771 lat (msec) : 20=1.16%, 50=98.56%, 100=0.28% 00:30:43.771 cpu : usr=98.78%, sys=0.84%, ctx=9, majf=0, minf=94 00:30:43.771 IO depths : 1=0.1%, 2=0.2%, 4=2.8%, 8=80.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:30:43.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 complete : 0=0.0%, 4=89.0%, 8=9.2%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 issued rwts: total=5768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.771 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.771 filename2: (groupid=0, jobs=1): err= 0: pid=786164: Mon Jul 15 11:41:25 2024 00:30:43.771 read: IOPS=573, BW=2293KiB/s (2348kB/s)(22.4MiB/10018msec) 00:30:43.771 slat (nsec): min=6854, max=46793, avg=13543.31, stdev=6221.88 00:30:43.771 clat (usec): min=15841, max=49100, avg=27788.51, stdev=1734.57 00:30:43.771 lat (usec): min=15848, max=49114, avg=27802.06, stdev=1734.86 00:30:43.771 clat percentiles (usec): 00:30:43.771 | 1.00th=[20317], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.771 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:30:43.771 | 70.00th=[27919], 
80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.771 | 99.00th=[29230], 99.50th=[35914], 99.90th=[47449], 99.95th=[47449], 00:30:43.771 | 99.99th=[49021] 00:30:43.771 bw ( KiB/s): min= 2176, max= 2352, per=4.17%, avg=2292.00, stdev=41.81, samples=20 00:30:43.771 iops : min= 544, max= 588, avg=573.00, stdev=10.45, samples=20 00:30:43.771 lat (msec) : 20=0.91%, 50=99.09% 00:30:43.771 cpu : usr=98.38%, sys=1.24%, ctx=19, majf=0, minf=68 00:30:43.771 IO depths : 1=5.8%, 2=11.9%, 4=24.4%, 8=51.2%, 16=6.7%, 32=0.0%, >=64=0.0% 00:30:43.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 issued rwts: total=5742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.771 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.771 filename2: (groupid=0, jobs=1): err= 0: pid=786165: Mon Jul 15 11:41:25 2024 00:30:43.771 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:30:43.771 slat (nsec): min=7955, max=68843, avg=22348.41, stdev=6839.08 00:30:43.771 clat (usec): min=17245, max=64984, avg=27914.55, stdev=2059.53 00:30:43.771 lat (usec): min=17259, max=65010, avg=27936.90, stdev=2058.93 00:30:43.771 clat percentiles (usec): 00:30:43.771 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.771 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:30:43.771 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.771 | 99.00th=[28705], 99.50th=[28967], 99.90th=[64750], 99.95th=[64750], 00:30:43.771 | 99.99th=[64750] 00:30:43.771 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2270.32, stdev=71.93, samples=19 00:30:43.771 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:30:43.771 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:30:43.771 cpu : usr=98.62%, sys=1.01%, ctx=7, majf=0, minf=60 00:30:43.771 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:43.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.771 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.771 filename2: (groupid=0, jobs=1): err= 0: pid=786166: Mon Jul 15 11:41:25 2024 00:30:43.771 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10008msec) 00:30:43.771 slat (nsec): min=6375, max=43738, avg=22914.30, stdev=5958.48 00:30:43.771 clat (usec): min=9805, max=58927, avg=27823.71, stdev=1996.85 00:30:43.771 lat (usec): min=9826, max=58944, avg=27846.62, stdev=1996.47 00:30:43.771 clat percentiles (usec): 00:30:43.771 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:30:43.771 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:30:43.771 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.771 | 99.00th=[28705], 99.50th=[28967], 99.90th=[58983], 99.95th=[58983], 00:30:43.771 | 99.99th=[58983] 00:30:43.771 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2270.32, stdev=71.93, samples=19 00:30:43.771 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:30:43.771 lat (msec) : 10=0.21%, 20=0.35%, 50=99.16%, 100=0.28% 00:30:43.771 cpu : usr=98.79%, sys=0.84%, ctx=14, majf=0, minf=85 00:30:43.771 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:43.771 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.771 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.771 filename2: (groupid=0, jobs=1): err= 0: pid=786167: Mon Jul 15 11:41:25 2024 00:30:43.771 read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.3MiB/10005msec) 00:30:43.771 slat (nsec): min=7460, max=44883, avg=20792.76, stdev=5764.83 00:30:43.771 clat (usec): min=16848, max=37922, avg=27834.92, stdev=839.32 00:30:43.771 lat (usec): min=16863, max=37939, avg=27855.71, stdev=839.39 00:30:43.771 clat percentiles (usec): 00:30:43.771 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.771 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:30:43.771 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.771 | 99.00th=[28705], 99.50th=[28967], 99.90th=[38011], 99.95th=[38011], 00:30:43.771 | 99.99th=[38011] 00:30:43.771 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2278.40, stdev=52.53, samples=20 00:30:43.771 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 00:30:43.771 lat (msec) : 20=0.28%, 50=99.72% 00:30:43.771 cpu : usr=98.52%, sys=1.10%, ctx=22, majf=0, minf=77 00:30:43.771 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:43.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.771 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.771 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.771 filename2: (groupid=0, jobs=1): err= 0: pid=786168: Mon Jul 15 11:41:25 2024 00:30:43.771 read: IOPS=579, BW=2319KiB/s (2375kB/s)(22.7MiB/10016msec) 00:30:43.771 slat (nsec): min=6912, max=57789, avg=13746.47, stdev=5170.59 00:30:43.771 clat (usec): min=3405, max=34062, avg=27472.23, stdev=3122.08 00:30:43.771 lat (usec): min=3427, max=34116, avg=27485.98, stdev=3121.68 00:30:43.771 clat percentiles (usec): 00:30:43.771 | 1.00th=[ 3916], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:30:43.771 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:30:43.771 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:30:43.772 | 99.00th=[28705], 99.50th=[28967], 99.90th=[32900], 99.95th=[32900], 00:30:43.772 | 99.99th=[33817] 00:30:43.772 bw ( KiB/s): min= 2176, max= 2944, per=4.22%, avg=2316.80, stdev=154.83, samples=20 00:30:43.772 iops : min= 544, max= 736, avg=579.20, stdev=38.71, samples=20 00:30:43.772 lat (msec) : 4=1.07%, 10=0.59%, 20=0.55%, 50=97.80% 00:30:43.772 cpu : usr=98.76%, sys=0.85%, ctx=13, majf=0, minf=87 00:30:43.772 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:43.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.772 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.772 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.772 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.772 filename2: (groupid=0, jobs=1): err= 0: pid=786169: Mon Jul 15 11:41:25 2024 00:30:43.772 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10003msec) 00:30:43.772 slat (nsec): min=13289, max=74503, avg=24265.25, stdev=12885.37 00:30:43.772 clat (usec): min=9701, max=65235, avg=27768.81, 
stdev=1969.97 00:30:43.772 lat (usec): min=9715, max=65253, avg=27793.07, stdev=1970.81 00:30:43.772 clat percentiles (usec): 00:30:43.772 | 1.00th=[26346], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:30:43.772 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:30:43.772 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:30:43.772 | 99.00th=[28967], 99.50th=[33817], 99.90th=[54789], 99.95th=[54789], 00:30:43.772 | 99.99th=[65274] 00:30:43.772 bw ( KiB/s): min= 2048, max= 2320, per=4.15%, avg=2277.05, stdev=68.73, samples=19 00:30:43.772 iops : min= 512, max= 580, avg=569.26, stdev=17.18, samples=19 00:30:43.772 lat (msec) : 10=0.28%, 20=0.32%, 50=99.12%, 100=0.28% 00:30:43.772 cpu : usr=99.07%, sys=0.52%, ctx=14, majf=0, minf=70 00:30:43.772 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:30:43.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.772 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.772 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.772 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.772 00:30:43.772 Run status group 0 (all jobs): 00:30:43.772 READ: bw=53.6MiB/s (56.2MB/s), 2277KiB/s-2337KiB/s (2332kB/s-2393kB/s), io=538MiB (564MB), run=10001-10024msec 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:43.772 11:41:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 bdev_null0 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 [2024-07-15 11:41:25.695860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 bdev_null1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:43.772 11:41:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:43.772 { 00:30:43.772 "params": { 00:30:43.772 "name": "Nvme$subsystem", 00:30:43.772 "trtype": "$TEST_TRANSPORT", 00:30:43.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.772 "adrfam": "ipv4", 00:30:43.772 "trsvcid": "$NVMF_PORT", 00:30:43.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.772 "hdgst": ${hdgst:-false}, 00:30:43.772 "ddgst": ${ddgst:-false} 00:30:43.772 }, 00:30:43.772 "method": "bdev_nvme_attach_controller" 00:30:43.772 } 00:30:43.772 EOF 00:30:43.772 )") 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:43.773 { 00:30:43.773 "params": { 00:30:43.773 "name": "Nvme$subsystem", 00:30:43.773 "trtype": "$TEST_TRANSPORT", 00:30:43.773 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.773 "adrfam": "ipv4", 00:30:43.773 "trsvcid": "$NVMF_PORT", 00:30:43.773 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.773 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.773 "hdgst": ${hdgst:-false}, 00:30:43.773 "ddgst": ${ddgst:-false} 
00:30:43.773 }, 00:30:43.773 "method": "bdev_nvme_attach_controller" 00:30:43.773 } 00:30:43.773 EOF 00:30:43.773 )") 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:43.773 "params": { 00:30:43.773 "name": "Nvme0", 00:30:43.773 "trtype": "tcp", 00:30:43.773 "traddr": "10.0.0.2", 00:30:43.773 "adrfam": "ipv4", 00:30:43.773 "trsvcid": "4420", 00:30:43.773 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:43.773 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:43.773 "hdgst": false, 00:30:43.773 "ddgst": false 00:30:43.773 }, 00:30:43.773 "method": "bdev_nvme_attach_controller" 00:30:43.773 },{ 00:30:43.773 "params": { 00:30:43.773 "name": "Nvme1", 00:30:43.773 "trtype": "tcp", 00:30:43.773 "traddr": "10.0.0.2", 00:30:43.773 "adrfam": "ipv4", 00:30:43.773 "trsvcid": "4420", 00:30:43.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.773 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:43.773 "hdgst": false, 00:30:43.773 "ddgst": false 00:30:43.773 }, 00:30:43.773 "method": "bdev_nvme_attach_controller" 00:30:43.773 }' 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:43.773 11:41:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.773 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:43.773 ... 00:30:43.773 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:43.773 ... 
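(For reference, the shell trace above assembles the bdev JSON config on an anonymous file descriptor and hands it to fio's SPDK bdev plugin, which is loaded via LD_PRELOAD. Below is a minimal stand-alone sketch of the same invocation with the config written to an ordinary file instead of /dev/fd/62. The "subsystems"/"bdev"/"config" wrapper is not visible in this excerpt and is assumed from SPDK's standard JSON config layout; the /tmp paths, the simplified job file, and the Nvme0n1 bdev name are illustrative assumptions, while the addresses, NQNs, and plugin path are copied from the trace above:

    # Sketch only: stand-alone equivalent of the traced fio_bdev invocation.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    cat > /tmp/job.fio <<'EOF'
    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=8k
    iodepth=8
    numjobs=2
    runtime=5
    time_based=1
    EOF
    # Same plugin path and option form as the trace above.
    LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/job.fio
)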
00:30:43.773 fio-3.35 00:30:43.773 Starting 4 threads 00:30:43.773 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.048 00:30:49.048 filename0: (groupid=0, jobs=1): err= 0: pid=788048: Mon Jul 15 11:41:31 2024 00:30:49.048 read: IOPS=2624, BW=20.5MiB/s (21.5MB/s)(103MiB/5003msec) 00:30:49.048 slat (nsec): min=6159, max=63004, avg=13575.29, stdev=8064.74 00:30:49.048 clat (usec): min=884, max=43466, avg=3008.33, stdev=1111.55 00:30:49.048 lat (usec): min=906, max=43503, avg=3021.90, stdev=1111.61 00:30:49.048 clat percentiles (usec): 00:30:49.048 | 1.00th=[ 1893], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2606], 00:30:49.048 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2999], 60.00th=[ 3032], 00:30:49.048 | 70.00th=[ 3163], 80.00th=[ 3326], 90.00th=[ 3523], 95.00th=[ 3818], 00:30:49.048 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[43254], 00:30:49.048 | 99.99th=[43254] 00:30:49.048 bw ( KiB/s): min=19616, max=22080, per=26.09%, avg=20961.78, stdev=794.36, samples=9 00:30:49.048 iops : min= 2452, max= 2760, avg=2620.22, stdev=99.29, samples=9 00:30:49.048 lat (usec) : 1000=0.01% 00:30:49.048 lat (msec) : 2=1.52%, 4=95.14%, 10=3.27%, 50=0.06% 00:30:49.048 cpu : usr=97.64%, sys=1.98%, ctx=9, majf=0, minf=83 00:30:49.048 IO depths : 1=0.1%, 2=6.7%, 4=63.9%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:49.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.048 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.048 issued rwts: total=13130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:49.048 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:49.048 filename0: (groupid=0, jobs=1): err= 0: pid=788049: Mon Jul 15 11:41:31 2024 00:30:49.048 read: IOPS=2478, BW=19.4MiB/s (20.3MB/s)(96.9MiB/5002msec) 00:30:49.048 slat (usec): min=5, max=119, avg=13.57, stdev= 9.75 00:30:49.048 clat (usec): min=627, max=6478, avg=3186.07, stdev=568.63 00:30:49.048 lat (usec): min=637, max=6487, avg=3199.65, stdev=568.43 00:30:49.048 clat percentiles (usec): 00:30:49.048 | 1.00th=[ 1680], 5.00th=[ 2409], 10.00th=[ 2638], 20.00th=[ 2835], 00:30:49.048 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3097], 60.00th=[ 3228], 00:30:49.048 | 70.00th=[ 3359], 80.00th=[ 3523], 90.00th=[ 3884], 95.00th=[ 4293], 00:30:49.048 | 99.00th=[ 5014], 99.50th=[ 5276], 99.90th=[ 5735], 99.95th=[ 6063], 00:30:49.048 | 99.99th=[ 6456] 00:30:49.048 bw ( KiB/s): min=17504, max=21739, per=24.62%, avg=19786.11, stdev=1517.05, samples=9 00:30:49.048 iops : min= 2188, max= 2717, avg=2473.22, stdev=189.57, samples=9 00:30:49.048 lat (usec) : 750=0.01%, 1000=0.06% 00:30:49.048 lat (msec) : 2=1.75%, 4=89.96%, 10=8.22% 00:30:49.048 cpu : usr=97.16%, sys=2.44%, ctx=8, majf=0, minf=93 00:30:49.048 IO depths : 1=0.1%, 2=3.7%, 4=68.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:49.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.048 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.048 issued rwts: total=12398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:49.048 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:49.048 filename1: (groupid=0, jobs=1): err= 0: pid=788050: Mon Jul 15 11:41:31 2024 00:30:49.048 read: IOPS=2494, BW=19.5MiB/s (20.4MB/s)(97.5MiB/5002msec) 00:30:49.048 slat (usec): min=5, max=104, avg=14.23, stdev=10.00 00:30:49.048 clat (usec): min=1196, max=43691, avg=3165.52, stdev=1164.16 00:30:49.048 lat (usec): min=1224, max=43724, avg=3179.75, stdev=1164.31 
00:30:49.048 clat percentiles (usec): 00:30:49.048 | 1.00th=[ 2040], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2769], 00:30:49.048 | 30.00th=[ 2900], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3195], 00:30:49.048 | 70.00th=[ 3326], 80.00th=[ 3458], 90.00th=[ 3785], 95.00th=[ 4146], 00:30:49.048 | 99.00th=[ 5080], 99.50th=[ 5407], 99.90th=[ 5997], 99.95th=[43779], 00:30:49.048 | 99.99th=[43779] 00:30:49.048 bw ( KiB/s): min=17296, max=22320, per=24.62%, avg=19779.56, stdev=1728.88, samples=9 00:30:49.048 iops : min= 2162, max= 2790, avg=2472.44, stdev=216.11, samples=9 00:30:49.048 lat (msec) : 2=0.80%, 4=92.75%, 10=6.39%, 50=0.06% 00:30:49.048 cpu : usr=95.80%, sys=2.98%, ctx=53, majf=0, minf=145 00:30:49.048 IO depths : 1=0.1%, 2=3.5%, 4=66.9%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:49.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.048 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.048 issued rwts: total=12477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:49.048 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:49.048 filename1: (groupid=0, jobs=1): err= 0: pid=788051: Mon Jul 15 11:41:31 2024 00:30:49.048 read: IOPS=2447, BW=19.1MiB/s (20.0MB/s)(95.7MiB/5003msec) 00:30:49.048 slat (usec): min=6, max=172, avg=12.95, stdev= 8.70 00:30:49.048 clat (usec): min=528, max=6159, avg=3229.33, stdev=567.45 00:30:49.048 lat (usec): min=535, max=6171, avg=3242.28, stdev=566.99 00:30:49.048 clat percentiles (usec): 00:30:49.048 | 1.00th=[ 1745], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2868], 00:30:49.048 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3130], 60.00th=[ 3261], 00:30:49.048 | 70.00th=[ 3392], 80.00th=[ 3523], 90.00th=[ 3851], 95.00th=[ 4293], 00:30:49.048 | 99.00th=[ 5211], 99.50th=[ 5473], 99.90th=[ 5932], 99.95th=[ 6063], 00:30:49.048 | 99.99th=[ 6128] 00:30:49.048 bw ( KiB/s): min=17296, max=21120, per=24.24%, avg=19479.11, stdev=1463.85, samples=9 00:30:49.048 iops : min= 2162, max= 2640, avg=2434.89, stdev=182.98, samples=9 00:30:49.048 lat (usec) : 750=0.03%, 1000=0.02% 00:30:49.048 lat (msec) : 2=1.54%, 4=89.92%, 10=8.49% 00:30:49.048 cpu : usr=97.34%, sys=2.28%, ctx=10, majf=0, minf=59 00:30:49.048 IO depths : 1=0.2%, 2=3.2%, 4=69.7%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:49.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.048 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.048 issued rwts: total=12244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:49.048 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:49.048 00:30:49.048 Run status group 0 (all jobs): 00:30:49.048 READ: bw=78.5MiB/s (82.3MB/s), 19.1MiB/s-20.5MiB/s (20.0MB/s-21.5MB/s), io=393MiB (412MB), run=5002-5003msec 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.048 11:41:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.048 11:41:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.048 11:41:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.048 11:41:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:49.048 11:41:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.048 11:41:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.048 11:41:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.048 00:30:49.048 real 0m24.081s 00:30:49.048 user 4m52.197s 00:30:49.048 sys 0m4.498s 00:30:49.048 11:41:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:49.048 11:41:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.048 ************************************ 00:30:49.048 END TEST fio_dif_rand_params 00:30:49.048 ************************************ 00:30:49.048 11:41:32 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:49.048 11:41:32 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:49.048 11:41:32 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:49.048 11:41:32 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:49.049 11:41:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:49.049 ************************************ 00:30:49.049 START TEST fio_dif_digest 00:30:49.049 ************************************ 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:49.049 11:41:32 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:49.049 bdev_null0 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:49.049 [2024-07-15 11:41:32.124637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:49.049 { 00:30:49.049 "params": { 00:30:49.049 "name": "Nvme$subsystem", 00:30:49.049 "trtype": "$TEST_TRANSPORT", 00:30:49.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:49.049 "adrfam": "ipv4", 00:30:49.049 "trsvcid": "$NVMF_PORT", 00:30:49.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:49.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:49.049 "hdgst": ${hdgst:-false}, 00:30:49.049 "ddgst": ${ddgst:-false} 00:30:49.049 }, 00:30:49.049 "method": "bdev_nvme_attach_controller" 00:30:49.049 } 00:30:49.049 EOF 00:30:49.049 )") 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
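For reference, the namespace this job reads from was stood up a few lines earlier by create_subsystems 0; in isolation that rpc_cmd sequence looks roughly like the sketch below (assembled from the trace above, assuming the stock scripts/rpc.py wrapper and that the tcp transport was already created with nvmf_create_transport -t tcp):

  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # export it through cnode0 on the NVMe/TCP listener 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420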
00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:49.049 "params": { 00:30:49.049 "name": "Nvme0", 00:30:49.049 "trtype": "tcp", 00:30:49.049 "traddr": "10.0.0.2", 00:30:49.049 "adrfam": "ipv4", 00:30:49.049 "trsvcid": "4420", 00:30:49.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:49.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:49.049 "hdgst": true, 00:30:49.049 "ddgst": true 00:30:49.049 }, 00:30:49.049 "method": "bdev_nvme_attach_controller" 00:30:49.049 }' 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:49.049 11:41:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.049 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:49.049 ... 
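The fio_bdev wrapper feeds that JSON to fio over /dev/fd/62, so nothing is written to disk; reproduced by hand with the JSON saved to a file, the invocation reduces to something like this sketch (bdev.json and job.fio are hypothetical file names; the plugin path, ioengine and fio binary are the ones shown in the trace, and the job file would name the attached bdev, Nvme0n1 per the Nvme0 controller name above, as its filename):

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./job.fio
  # bdev.json: the bdev_nvme_attach_controller config printed above (hdgst/ddgst enabled)
  # job.fio:   filename=Nvme0n1, bs=128k, iodepth=3, numjobs=3, runtime=10, as set by this test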
00:30:49.049 fio-3.35 00:30:49.049 Starting 3 threads 00:30:49.049 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.249 00:31:01.249 filename0: (groupid=0, jobs=1): err= 0: pid=789267: Mon Jul 15 11:41:43 2024 00:31:01.249 read: IOPS=291, BW=36.5MiB/s (38.2MB/s)(365MiB/10006msec) 00:31:01.249 slat (nsec): min=6505, max=57016, avg=15835.55, stdev=6852.37 00:31:01.249 clat (usec): min=7918, max=12943, avg=10261.14, stdev=734.54 00:31:01.249 lat (usec): min=7930, max=12951, avg=10276.97, stdev=734.80 00:31:01.249 clat percentiles (usec): 00:31:01.249 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:31:01.249 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:31:01.249 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:31:01.249 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12649], 99.95th=[12780], 00:31:01.249 | 99.99th=[12911] 00:31:01.249 bw ( KiB/s): min=36096, max=39680, per=34.75%, avg=37349.05, stdev=898.62, samples=19 00:31:01.249 iops : min= 282, max= 310, avg=291.79, stdev= 7.02, samples=19 00:31:01.249 lat (msec) : 10=35.99%, 20=64.01% 00:31:01.249 cpu : usr=94.71%, sys=4.68%, ctx=345, majf=0, minf=130 00:31:01.249 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.249 issued rwts: total=2920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.249 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:01.249 filename0: (groupid=0, jobs=1): err= 0: pid=789268: Mon Jul 15 11:41:43 2024 00:31:01.249 read: IOPS=275, BW=34.5MiB/s (36.2MB/s)(347MiB/10046msec) 00:31:01.249 slat (nsec): min=6486, max=78553, avg=14347.56, stdev=6605.54 00:31:01.249 clat (usec): min=8175, max=50957, avg=10840.12, stdev=1284.52 00:31:01.249 lat (usec): min=8187, max=50968, avg=10854.47, stdev=1284.53 00:31:01.249 clat percentiles (usec): 00:31:01.249 | 1.00th=[ 9110], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:31:01.249 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:31:01.249 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:31:01.249 | 99.00th=[12780], 99.50th=[13042], 99.90th=[14091], 99.95th=[46924], 00:31:01.249 | 99.99th=[51119] 00:31:01.249 bw ( KiB/s): min=34304, max=37120, per=32.99%, avg=35456.00, stdev=716.90, samples=20 00:31:01.249 iops : min= 268, max= 290, avg=277.00, stdev= 5.60, samples=20 00:31:01.249 lat (msec) : 10=14.32%, 20=85.61%, 50=0.04%, 100=0.04% 00:31:01.249 cpu : usr=96.19%, sys=3.48%, ctx=26, majf=0, minf=172 00:31:01.249 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.249 issued rwts: total=2772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.249 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:01.249 filename0: (groupid=0, jobs=1): err= 0: pid=789269: Mon Jul 15 11:41:43 2024 00:31:01.249 read: IOPS=273, BW=34.1MiB/s (35.8MB/s)(343MiB/10046msec) 00:31:01.249 slat (nsec): min=6473, max=44687, avg=14307.23, stdev=6485.22 00:31:01.249 clat (usec): min=8587, max=51864, avg=10954.40, stdev=1286.46 00:31:01.249 lat (usec): min=8601, max=51876, avg=10968.70, stdev=1286.39 00:31:01.249 clat percentiles (usec): 00:31:01.249 | 1.00th=[ 9110], 
5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:31:01.249 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:31:01.249 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:31:01.250 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13960], 99.95th=[46400], 00:31:01.250 | 99.99th=[51643] 00:31:01.250 bw ( KiB/s): min=33792, max=36608, per=32.64%, avg=35084.80, stdev=672.07, samples=20 00:31:01.250 iops : min= 264, max= 286, avg=274.10, stdev= 5.25, samples=20 00:31:01.250 lat (msec) : 10=10.10%, 20=89.83%, 50=0.04%, 100=0.04% 00:31:01.250 cpu : usr=95.85%, sys=3.82%, ctx=25, majf=0, minf=113 00:31:01.250 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.250 issued rwts: total=2743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.250 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:01.250 00:31:01.250 Run status group 0 (all jobs): 00:31:01.250 READ: bw=105MiB/s (110MB/s), 34.1MiB/s-36.5MiB/s (35.8MB/s-38.2MB/s), io=1054MiB (1106MB), run=10006-10046msec 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.250 00:31:01.250 real 0m11.226s 00:31:01.250 user 0m35.525s 00:31:01.250 sys 0m1.471s 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:01.250 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:01.250 ************************************ 00:31:01.250 END TEST fio_dif_digest 00:31:01.250 ************************************ 00:31:01.250 11:41:43 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:01.250 11:41:43 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:01.250 11:41:43 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
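The nvmftestfini teardown underway here is the mirror image of the setup: unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt process recorded at startup (780660 in this run, held in a pid variable here called nvmfpid for the sketch), then rebind PCI devices. Condensed:

  modprobe -v -r nvme-tcp              # also drops nvme_fabrics/nvme_keyring, as the rmmod lines below show
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  scripts/setup.sh reset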
00:31:01.250 rmmod nvme_tcp 00:31:01.250 rmmod nvme_fabrics 00:31:01.250 rmmod nvme_keyring 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 780660 ']' 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 780660 00:31:01.250 11:41:43 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 780660 ']' 00:31:01.250 11:41:43 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 780660 00:31:01.250 11:41:43 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:01.250 11:41:43 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:01.250 11:41:43 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 780660 00:31:01.250 11:41:43 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:01.250 11:41:43 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:01.250 11:41:43 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 780660' 00:31:01.250 killing process with pid 780660 00:31:01.250 11:41:43 nvmf_dif -- common/autotest_common.sh@967 -- # kill 780660 00:31:01.250 11:41:43 nvmf_dif -- common/autotest_common.sh@972 -- # wait 780660 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:01.250 11:41:43 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:03.156 Waiting for block devices as requested 00:31:03.156 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:03.156 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:03.156 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:03.156 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:03.156 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:03.156 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:03.416 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:03.416 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:03.416 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:03.675 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:03.675 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:03.675 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:03.934 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:03.934 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:03.934 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:03.934 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:04.193 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:04.193 11:41:47 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:04.193 11:41:47 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:04.193 11:41:47 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:04.193 11:41:47 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:04.193 11:41:47 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.193 11:41:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:04.193 11:41:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.727 11:41:49 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:06.727 00:31:06.727 real 1m14.214s 00:31:06.727 user 7m10.759s 00:31:06.727 sys 0m18.879s 00:31:06.727 11:41:49 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:06.727 
11:41:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:06.727 ************************************ 00:31:06.727 END TEST nvmf_dif 00:31:06.727 ************************************ 00:31:06.727 11:41:49 -- common/autotest_common.sh@1142 -- # return 0 00:31:06.727 11:41:49 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:06.727 11:41:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:06.727 11:41:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:06.727 11:41:49 -- common/autotest_common.sh@10 -- # set +x 00:31:06.727 ************************************ 00:31:06.727 START TEST nvmf_abort_qd_sizes 00:31:06.727 ************************************ 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:06.727 * Looking for test storage... 00:31:06.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.727 11:41:49 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:06.727 11:41:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:12.004 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.004 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:12.005 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:12.005 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:12.005 Found net devices under 0000:86:00.0: cvl_0_0 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:12.005 Found net devices under 0000:86:00.1: cvl_0_1 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
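Both E810 ports stay bound to the kernel ice driver; nvmf_tcp_init, traced next, turns them into a point-to-point test topology by pushing the target-side port into its own network namespace. Condensed, the commands it runs (as the trace below confirms) are:

  ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in from the peer
  ping -c 1 10.0.0.2                                             # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1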
00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:12.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:31:12.005 00:31:12.005 --- 10.0.0.2 ping statistics --- 00:31:12.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.005 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:12.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:31:12.005 00:31:12.005 --- 10.0.0.1 ping statistics --- 00:31:12.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.005 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:12.005 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:15.298 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:15.298 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:15.865 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:15.865 11:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.865 11:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:15.865 11:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:15.865 11:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.865 11:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:15.865 11:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:15.865 11:41:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:15.865 11:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:15.865 11:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:15.865 11:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:15.865 11:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=797053 00:31:15.865 11:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:15.866 11:41:59 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 797053 00:31:15.866 11:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 797053 ']' 00:31:15.866 11:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.866 11:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:15.866 11:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
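nvmfappstart then launches the SPDK target inside that namespace and waitforlisten blocks until its RPC socket is usable before any rpc_cmd is issued; stripped of the helper plumbing it amounts to the sketch below (the socket wait is simplified to a plain file test, the real helper is more careful):

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 1; done   # rpc_cmd talks to this socket from here on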
00:31:15.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.866 11:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:15.866 11:41:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:15.866 [2024-07-15 11:41:59.450585] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:31:15.866 [2024-07-15 11:41:59.450628] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.123 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.123 [2024-07-15 11:41:59.521452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.123 [2024-07-15 11:41:59.602686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.123 [2024-07-15 11:41:59.602723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.123 [2024-07-15 11:41:59.602733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.123 [2024-07-15 11:41:59.602740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.123 [2024-07-15 11:41:59.602746] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.123 [2024-07-15 11:41:59.602794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.123 [2024-07-15 11:41:59.602904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.123 [2024-07-15 11:41:59.602986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.123 [2024-07-15 11:41:59.602987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.686 11:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:16.686 11:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:31:16.686 11:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:16.686 11:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:16.686 11:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:31:16.943 11:42:00 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:16.943 11:42:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:16.943 ************************************ 00:31:16.943 START TEST spdk_target_abort 00:31:16.943 ************************************ 00:31:16.943 11:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:31:16.943 11:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:16.943 11:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:31:16.943 11:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.943 11:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:20.217 spdk_targetn1 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:20.217 [2024-07-15 11:42:03.182212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:20.217 [2024-07-15 11:42:03.215203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:20.217 11:42:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:20.217 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:23.489 Initializing NVMe Controllers 00:31:23.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:23.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:23.489 Initialization complete. Launching workers. 00:31:23.489 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14618, failed: 0 00:31:23.489 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1315, failed to submit 13303 00:31:23.489 success 725, unsuccess 590, failed 0 00:31:23.489 11:42:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:23.489 11:42:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:23.489 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.761 Initializing NVMe Controllers 00:31:26.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:26.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:26.761 Initialization complete. Launching workers. 00:31:26.761 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8671, failed: 0 00:31:26.761 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1236, failed to submit 7435 00:31:26.761 success 335, unsuccess 901, failed 0 00:31:26.761 11:42:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:26.761 11:42:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:26.761 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.032 Initializing NVMe Controllers 00:31:30.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:30.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:30.032 Initialization complete. Launching workers. 
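The three passes of the qd loop reuse the same abort command with only -q changed; the q=64 pass whose completion counts follow was launched as copied from the trace above, with flag meanings per the example's usage text:

  build/examples/abort -q 64 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  # -q        queue depth being swept (4, 24, 64)
  # -w rw -M 50   50/50 read/write mix
  # -o 4096   4 KiB I/Os; -r is the transport ID of the target created above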
00:31:30.032 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38363, failed: 0 00:31:30.032 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2701, failed to submit 35662 00:31:30.032 success 591, unsuccess 2110, failed 0 00:31:30.032 11:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:30.032 11:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.032 11:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:30.032 11:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.032 11:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:30.032 11:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.032 11:42:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:30.963 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.963 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 797053 00:31:30.963 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 797053 ']' 00:31:30.963 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 797053 00:31:30.963 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:31:30.963 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:30.963 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 797053 00:31:30.963 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:30.963 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:30.963 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 797053' 00:31:30.963 killing process with pid 797053 00:31:30.963 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 797053 00:31:30.963 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 797053 00:31:31.221 00:31:31.221 real 0m14.222s 00:31:31.221 user 0m56.723s 00:31:31.221 sys 0m2.247s 00:31:31.221 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:31.221 11:42:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:31.221 ************************************ 00:31:31.221 END TEST spdk_target_abort 00:31:31.221 ************************************ 00:31:31.221 11:42:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:31.222 11:42:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:31.222 11:42:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:31.222 11:42:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:31.222 11:42:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:31.222 
************************************ 00:31:31.222 START TEST kernel_target_abort 00:31:31.222 ************************************ 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:31.222 11:42:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:33.802 Waiting for block devices as requested 00:31:33.802 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:34.072 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:34.072 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:34.072 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:34.330 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:34.330 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:34.330 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:34.330 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:34.589 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:34.589 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:34.589 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:34.848 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:34.848 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:34.848 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:34.848 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:35.107 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:35.107 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:35.107 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:35.107 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:35.107 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:35.107 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:35.107 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:35.107 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:35.107 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:35.107 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:35.107 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:35.365 No valid GPT data, bailing 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:35.365 11:42:18 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:31:35.365 00:31:35.365 Discovery Log Number of Records 2, Generation counter 2 00:31:35.365 =====Discovery Log Entry 0====== 00:31:35.365 trtype: tcp 00:31:35.365 adrfam: ipv4 00:31:35.365 subtype: current discovery subsystem 00:31:35.365 treq: not specified, sq flow control disable supported 00:31:35.365 portid: 1 00:31:35.365 trsvcid: 4420 00:31:35.365 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:35.365 traddr: 10.0.0.1 00:31:35.365 eflags: none 00:31:35.365 sectype: none 00:31:35.365 =====Discovery Log Entry 1====== 00:31:35.365 trtype: tcp 00:31:35.365 adrfam: ipv4 00:31:35.365 subtype: nvme subsystem 00:31:35.365 treq: not specified, sq flow control disable supported 00:31:35.365 portid: 1 00:31:35.365 trsvcid: 4420 00:31:35.365 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:35.365 traddr: 10.0.0.1 00:31:35.365 eflags: none 00:31:35.365 sectype: none 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:35.365 11:42:18 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:35.365 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:35.365 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.644 Initializing NVMe Controllers 00:31:38.644 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:38.644 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:38.644 Initialization complete. Launching workers. 00:31:38.644 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85793, failed: 0 00:31:38.644 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 85793, failed to submit 0 00:31:38.644 success 0, unsuccess 85793, failed 0 00:31:38.644 11:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:38.644 11:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:38.644 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.925 Initializing NVMe Controllers 00:31:41.925 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:41.925 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:41.925 Initialization complete. Launching workers. 
00:31:41.925 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 140312, failed: 0 00:31:41.925 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34658, failed to submit 105654 00:31:41.925 success 0, unsuccess 34658, failed 0 00:31:41.925 11:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:41.925 11:42:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:41.925 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.200 Initializing NVMe Controllers 00:31:45.200 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:45.200 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:45.200 Initialization complete. Launching workers. 00:31:45.200 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 133207, failed: 0 00:31:45.200 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33350, failed to submit 99857 00:31:45.200 success 0, unsuccess 33350, failed 0 00:31:45.200 11:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:45.200 11:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:45.200 11:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:45.200 11:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:45.200 11:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:45.200 11:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:45.200 11:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:45.200 11:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:45.200 11:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:45.200 11:42:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:47.726 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:31:47.726 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:47.726 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:48.292 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:48.551 00:31:48.551 real 0m17.338s 00:31:48.551 user 0m8.560s 00:31:48.551 sys 0m5.087s 00:31:48.551 11:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:48.551 11:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.551 ************************************ 00:31:48.551 END TEST kernel_target_abort 00:31:48.551 ************************************ 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:48.551 rmmod nvme_tcp 00:31:48.551 rmmod nvme_fabrics 00:31:48.551 rmmod nvme_keyring 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 797053 ']' 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 797053 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 797053 ']' 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 797053 00:31:48.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (797053) - No such process 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 797053 is not found' 00:31:48.551 Process with pid 797053 is not found 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:48.551 11:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:51.147 Waiting for block devices as requested 00:31:51.405 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:51.405 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:51.405 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:51.664 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:51.664 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:51.664 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:51.664 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:51.923 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:51.923 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:51.923 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:52.182 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:52.182 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:52.182 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:52.182 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:31:52.441 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:52.441 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:52.441 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:52.700 11:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:52.700 11:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:52.700 11:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:52.700 11:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:52.701 11:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.701 11:42:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:52.701 11:42:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.640 11:42:38 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:54.640 00:31:54.640 real 0m48.339s 00:31:54.640 user 1m9.530s 00:31:54.640 sys 0m15.871s 00:31:54.640 11:42:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:54.640 11:42:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:54.640 ************************************ 00:31:54.640 END TEST nvmf_abort_qd_sizes 00:31:54.640 ************************************ 00:31:54.640 11:42:38 -- common/autotest_common.sh@1142 -- # return 0 00:31:54.640 11:42:38 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:54.640 11:42:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:54.640 11:42:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:54.640 11:42:38 -- common/autotest_common.sh@10 -- # set +x 00:31:54.640 ************************************ 00:31:54.640 START TEST keyring_file 00:31:54.640 ************************************ 00:31:54.640 11:42:38 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:54.900 * Looking for test storage... 
00:31:54.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:54.900 11:42:38 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:54.900 11:42:38 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.900 11:42:38 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:54.900 11:42:38 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.900 11:42:38 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.900 11:42:38 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.900 11:42:38 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.901 11:42:38 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.901 11:42:38 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.901 11:42:38 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.901 11:42:38 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.901 11:42:38 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.901 11:42:38 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.901 11:42:38 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:54.901 11:42:38 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:54.901 11:42:38 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:54.901 11:42:38 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:54.901 11:42:38 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:54.901 11:42:38 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:54.901 11:42:38 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:54.901 11:42:38 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zBwzvvzYTJ 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:54.901 11:42:38 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zBwzvvzYTJ 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zBwzvvzYTJ 00:31:54.901 11:42:38 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.zBwzvvzYTJ 00:31:54.901 11:42:38 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XIJ2YWoSEN 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:54.901 11:42:38 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XIJ2YWoSEN 00:31:54.901 11:42:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XIJ2YWoSEN 00:31:54.901 11:42:38 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.XIJ2YWoSEN 00:31:54.901 11:42:38 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:54.901 11:42:38 keyring_file -- keyring/file.sh@30 -- # tgtpid=806346 00:31:54.901 11:42:38 keyring_file -- keyring/file.sh@32 -- # waitforlisten 806346 00:31:54.901 11:42:38 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 806346 ']' 00:31:54.901 11:42:38 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.901 11:42:38 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:54.901 11:42:38 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.901 11:42:38 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:54.901 11:42:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:54.901 [2024-07-15 11:42:38.468768] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:31:54.901 [2024-07-15 11:42:38.468817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806346 ] 00:31:54.901 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.161 [2024-07-15 11:42:38.534291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.161 [2024-07-15 11:42:38.606615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.729 11:42:39 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:55.729 11:42:39 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:55.729 11:42:39 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:55.729 11:42:39 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.729 11:42:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:55.729 [2024-07-15 11:42:39.295573] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.729 null0 00:31:55.988 [2024-07-15 11:42:39.327614] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:55.988 [2024-07-15 11:42:39.327878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:55.988 [2024-07-15 11:42:39.335633] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:55.988 11:42:39 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.988 11:42:39 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:55.988 11:42:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:55.988 11:42:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:55.988 11:42:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:55.988 11:42:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:55.988 11:42:39 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:55.988 11:42:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:55.988 11:42:39 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:55.988 11:42:39 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.988 11:42:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:55.988 [2024-07-15 11:42:39.347663] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:55.988 request: 00:31:55.988 { 00:31:55.988 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:55.988 "secure_channel": false, 00:31:55.989 "listen_address": { 00:31:55.989 "trtype": "tcp", 00:31:55.989 "traddr": "127.0.0.1", 00:31:55.989 "trsvcid": "4420" 00:31:55.989 }, 00:31:55.989 "method": "nvmf_subsystem_add_listener", 00:31:55.989 "req_id": 1 00:31:55.989 } 00:31:55.989 Got JSON-RPC error response 00:31:55.989 response: 00:31:55.989 { 00:31:55.989 "code": -32602, 00:31:55.989 "message": "Invalid parameters" 00:31:55.989 } 00:31:55.989 11:42:39 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:55.989 11:42:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:31:55.989 11:42:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:55.989 11:42:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:55.989 11:42:39 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:55.989 11:42:39 keyring_file -- keyring/file.sh@46 -- # bperfpid=806440 00:31:55.989 11:42:39 keyring_file -- keyring/file.sh@48 -- # waitforlisten 806440 /var/tmp/bperf.sock 00:31:55.989 11:42:39 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:55.989 11:42:39 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 806440 ']' 00:31:55.989 11:42:39 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:55.989 11:42:39 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:55.989 11:42:39 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:55.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:55.989 11:42:39 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:55.989 11:42:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:55.989 [2024-07-15 11:42:39.400968] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:31:55.989 [2024-07-15 11:42:39.401011] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806440 ] 00:31:55.989 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.989 [2024-07-15 11:42:39.469663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.989 [2024-07-15 11:42:39.548722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.926 11:42:40 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:56.926 11:42:40 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:56.926 11:42:40 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zBwzvvzYTJ 00:31:56.926 11:42:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zBwzvvzYTJ 00:31:56.926 11:42:40 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XIJ2YWoSEN 00:31:56.926 11:42:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XIJ2YWoSEN 00:31:57.185 11:42:40 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:57.185 11:42:40 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:57.185 11:42:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:57.185 11:42:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:57.185 11:42:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:57.445 11:42:40 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.zBwzvvzYTJ == \/\t\m\p\/\t\m\p\.\z\B\w\z\v\v\z\Y\T\J ]] 00:31:57.445 11:42:40 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:31:57.445 11:42:40 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:57.445 11:42:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:57.445 11:42:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:57.445 11:42:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:57.445 11:42:40 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.XIJ2YWoSEN == \/\t\m\p\/\t\m\p\.\X\I\J\2\Y\W\o\S\E\N ]] 00:31:57.445 11:42:40 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:57.445 11:42:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:57.445 11:42:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:57.445 11:42:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:57.445 11:42:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:57.445 11:42:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:57.704 11:42:41 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:57.704 11:42:41 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:57.704 11:42:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:57.704 11:42:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:57.704 11:42:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:57.704 11:42:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:57.704 11:42:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:57.963 11:42:41 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:57.963 11:42:41 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:57.963 11:42:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:57.963 [2024-07-15 11:42:41.474313] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:57.963 nvme0n1 00:31:58.222 11:42:41 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:58.222 11:42:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:58.222 11:42:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:58.222 11:42:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:58.222 11:42:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.222 11:42:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:58.222 11:42:41 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:58.222 11:42:41 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:58.222 11:42:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:58.222 11:42:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:58.222 11:42:41 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:31:58.222 11:42:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.222 11:42:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:58.481 11:42:41 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:58.481 11:42:41 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:58.481 Running I/O for 1 seconds... 00:31:59.857 00:31:59.857 Latency(us) 00:31:59.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.857 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:59.857 nvme0n1 : 1.00 16204.35 63.30 0.00 0.00 7881.14 3960.65 14702.86 00:31:59.857 =================================================================================================================== 00:31:59.857 Total : 16204.35 63.30 0.00 0.00 7881.14 3960.65 14702.86 00:31:59.857 0 00:31:59.857 11:42:43 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:59.857 11:42:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:59.857 11:42:43 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:59.857 11:42:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:59.857 11:42:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:59.857 11:42:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.857 11:42:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:59.857 11:42:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:59.857 11:42:43 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:59.857 11:42:43 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:59.857 11:42:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:59.857 11:42:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:59.857 11:42:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.857 11:42:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:59.857 11:42:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.116 11:42:43 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:00.116 11:42:43 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:00.117 11:42:43 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:00.117 11:42:43 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:00.117 11:42:43 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:00.117 11:42:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:00.117 11:42:43 keyring_file -- common/autotest_common.sh@640 -- # type -t 
bperf_cmd 00:32:00.117 11:42:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:00.117 11:42:43 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:00.117 11:42:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:00.376 [2024-07-15 11:42:43.751293] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:00.376 [2024-07-15 11:42:43.751702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d3770 (107): Transport endpoint is not connected 00:32:00.376 [2024-07-15 11:42:43.752696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d3770 (9): Bad file descriptor 00:32:00.376 [2024-07-15 11:42:43.753697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:00.376 [2024-07-15 11:42:43.753708] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:00.376 [2024-07-15 11:42:43.753714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:00.376 request: 00:32:00.376 { 00:32:00.376 "name": "nvme0", 00:32:00.376 "trtype": "tcp", 00:32:00.376 "traddr": "127.0.0.1", 00:32:00.376 "adrfam": "ipv4", 00:32:00.376 "trsvcid": "4420", 00:32:00.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:00.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:00.376 "prchk_reftag": false, 00:32:00.376 "prchk_guard": false, 00:32:00.376 "hdgst": false, 00:32:00.376 "ddgst": false, 00:32:00.376 "psk": "key1", 00:32:00.376 "method": "bdev_nvme_attach_controller", 00:32:00.376 "req_id": 1 00:32:00.376 } 00:32:00.376 Got JSON-RPC error response 00:32:00.376 response: 00:32:00.376 { 00:32:00.376 "code": -5, 00:32:00.376 "message": "Input/output error" 00:32:00.376 } 00:32:00.376 11:42:43 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:00.376 11:42:43 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:00.376 11:42:43 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:00.376 11:42:43 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:00.376 11:42:43 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:00.376 11:42:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:00.376 11:42:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.376 11:42:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.376 11:42:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.376 11:42:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.376 11:42:43 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:00.376 11:42:43 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:00.376 11:42:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:00.376 11:42:43 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:32:00.376 11:42:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.376 11:42:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.376 11:42:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:00.635 11:42:44 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:00.635 11:42:44 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:00.635 11:42:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:00.895 11:42:44 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:00.895 11:42:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:01.154 11:42:44 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:01.154 11:42:44 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:01.154 11:42:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.154 11:42:44 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:01.154 11:42:44 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.zBwzvvzYTJ 00:32:01.154 11:42:44 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.zBwzvvzYTJ 00:32:01.154 11:42:44 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:01.154 11:42:44 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.zBwzvvzYTJ 00:32:01.154 11:42:44 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:01.154 11:42:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:01.154 11:42:44 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:01.154 11:42:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:01.154 11:42:44 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zBwzvvzYTJ 00:32:01.154 11:42:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zBwzvvzYTJ 00:32:01.413 [2024-07-15 11:42:44.825834] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zBwzvvzYTJ': 0100660 00:32:01.413 [2024-07-15 11:42:44.825857] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:01.413 request: 00:32:01.413 { 00:32:01.413 "name": "key0", 00:32:01.413 "path": "/tmp/tmp.zBwzvvzYTJ", 00:32:01.413 "method": "keyring_file_add_key", 00:32:01.413 "req_id": 1 00:32:01.413 } 00:32:01.413 Got JSON-RPC error response 00:32:01.413 response: 00:32:01.413 { 00:32:01.413 "code": -1, 00:32:01.413 "message": "Operation not permitted" 00:32:01.413 } 00:32:01.413 11:42:44 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:01.413 11:42:44 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:01.413 11:42:44 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:01.413 11:42:44 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:32:01.413 11:42:44 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.zBwzvvzYTJ 00:32:01.413 11:42:44 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zBwzvvzYTJ 00:32:01.413 11:42:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zBwzvvzYTJ 00:32:01.672 11:42:45 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.zBwzvvzYTJ 00:32:01.672 11:42:45 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:01.672 11:42:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:01.672 11:42:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:01.672 11:42:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:01.672 11:42:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:01.672 11:42:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.672 11:42:45 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:01.672 11:42:45 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:01.672 11:42:45 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:01.672 11:42:45 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:01.672 11:42:45 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:01.672 11:42:45 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:01.672 11:42:45 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:01.672 11:42:45 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:01.672 11:42:45 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:01.672 11:42:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:01.932 [2024-07-15 11:42:45.407390] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.zBwzvvzYTJ': No such file or directory 00:32:01.932 [2024-07-15 11:42:45.407427] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:01.932 [2024-07-15 11:42:45.407447] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:01.932 [2024-07-15 11:42:45.407453] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:01.932 [2024-07-15 11:42:45.407459] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:01.932 request: 00:32:01.932 { 00:32:01.932 "name": "nvme0", 00:32:01.932 "trtype": "tcp", 00:32:01.932 "traddr": "127.0.0.1", 00:32:01.932 "adrfam": "ipv4", 00:32:01.932 "trsvcid": "4420", 00:32:01.932 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:32:01.932 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:01.932 "prchk_reftag": false, 00:32:01.932 "prchk_guard": false, 00:32:01.932 "hdgst": false, 00:32:01.932 "ddgst": false, 00:32:01.932 "psk": "key0", 00:32:01.932 "method": "bdev_nvme_attach_controller", 00:32:01.932 "req_id": 1 00:32:01.932 } 00:32:01.932 Got JSON-RPC error response 00:32:01.932 response: 00:32:01.932 { 00:32:01.932 "code": -19, 00:32:01.932 "message": "No such device" 00:32:01.932 } 00:32:01.932 11:42:45 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:01.932 11:42:45 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:01.932 11:42:45 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:01.932 11:42:45 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:01.932 11:42:45 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:01.932 11:42:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:02.192 11:42:45 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:02.192 11:42:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:02.192 11:42:45 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:02.192 11:42:45 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:02.192 11:42:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:02.192 11:42:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:02.192 11:42:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8KmR1LM0Uj 00:32:02.192 11:42:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:02.192 11:42:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:02.192 11:42:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:02.192 11:42:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:02.192 11:42:45 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:02.192 11:42:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:02.192 11:42:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:02.192 11:42:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8KmR1LM0Uj 00:32:02.192 11:42:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8KmR1LM0Uj 00:32:02.192 11:42:45 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.8KmR1LM0Uj 00:32:02.192 11:42:45 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8KmR1LM0Uj 00:32:02.192 11:42:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8KmR1LM0Uj 00:32:02.451 11:42:45 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:02.451 11:42:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:02.451 nvme0n1 00:32:02.709 11:42:46 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:32:02.709 11:42:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.709 11:42:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:02.710 11:42:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.710 11:42:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:02.710 11:42:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.710 11:42:46 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:02.710 11:42:46 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:02.710 11:42:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:02.968 11:42:46 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:02.968 11:42:46 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:02.968 11:42:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.968 11:42:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.968 11:42:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:03.228 11:42:46 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:03.228 11:42:46 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:03.228 11:42:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:03.228 11:42:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:03.228 11:42:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.228 11:42:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:03.228 11:42:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.228 11:42:46 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:03.228 11:42:46 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:03.228 11:42:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:03.486 11:42:46 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:03.486 11:42:46 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:03.486 11:42:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.745 11:42:47 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:03.745 11:42:47 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8KmR1LM0Uj 00:32:03.745 11:42:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8KmR1LM0Uj 00:32:03.745 11:42:47 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XIJ2YWoSEN 00:32:03.745 11:42:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XIJ2YWoSEN 00:32:04.004 11:42:47 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.004 11:42:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.263 nvme0n1 00:32:04.263 11:42:47 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:04.263 11:42:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:04.523 11:42:47 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:04.523 "subsystems": [ 00:32:04.523 { 00:32:04.523 "subsystem": "keyring", 00:32:04.523 "config": [ 00:32:04.523 { 00:32:04.523 "method": "keyring_file_add_key", 00:32:04.523 "params": { 00:32:04.523 "name": "key0", 00:32:04.523 "path": "/tmp/tmp.8KmR1LM0Uj" 00:32:04.523 } 00:32:04.523 }, 00:32:04.523 { 00:32:04.523 "method": "keyring_file_add_key", 00:32:04.523 "params": { 00:32:04.523 "name": "key1", 00:32:04.523 "path": "/tmp/tmp.XIJ2YWoSEN" 00:32:04.523 } 00:32:04.523 } 00:32:04.523 ] 00:32:04.523 }, 00:32:04.523 { 00:32:04.523 "subsystem": "iobuf", 00:32:04.523 "config": [ 00:32:04.523 { 00:32:04.523 "method": "iobuf_set_options", 00:32:04.523 "params": { 00:32:04.523 "small_pool_count": 8192, 00:32:04.523 "large_pool_count": 1024, 00:32:04.523 "small_bufsize": 8192, 00:32:04.523 "large_bufsize": 135168 00:32:04.523 } 00:32:04.523 } 00:32:04.523 ] 00:32:04.523 }, 00:32:04.523 { 00:32:04.523 "subsystem": "sock", 00:32:04.523 "config": [ 00:32:04.523 { 00:32:04.523 "method": "sock_set_default_impl", 00:32:04.523 "params": { 00:32:04.523 "impl_name": "posix" 00:32:04.523 } 00:32:04.523 }, 00:32:04.523 { 00:32:04.523 "method": "sock_impl_set_options", 00:32:04.523 "params": { 00:32:04.523 "impl_name": "ssl", 00:32:04.523 "recv_buf_size": 4096, 00:32:04.523 "send_buf_size": 4096, 00:32:04.523 "enable_recv_pipe": true, 00:32:04.523 "enable_quickack": false, 00:32:04.523 "enable_placement_id": 0, 00:32:04.523 "enable_zerocopy_send_server": true, 00:32:04.523 "enable_zerocopy_send_client": false, 00:32:04.523 "zerocopy_threshold": 0, 00:32:04.523 "tls_version": 0, 00:32:04.523 "enable_ktls": false 00:32:04.523 } 00:32:04.523 }, 00:32:04.523 { 00:32:04.523 "method": "sock_impl_set_options", 00:32:04.523 "params": { 00:32:04.523 "impl_name": "posix", 00:32:04.523 "recv_buf_size": 2097152, 00:32:04.523 "send_buf_size": 2097152, 00:32:04.523 "enable_recv_pipe": true, 00:32:04.523 "enable_quickack": false, 00:32:04.523 "enable_placement_id": 0, 00:32:04.523 "enable_zerocopy_send_server": true, 00:32:04.523 "enable_zerocopy_send_client": false, 00:32:04.523 "zerocopy_threshold": 0, 00:32:04.523 "tls_version": 0, 00:32:04.523 "enable_ktls": false 00:32:04.523 } 00:32:04.523 } 00:32:04.523 ] 00:32:04.523 }, 00:32:04.523 { 00:32:04.523 "subsystem": "vmd", 00:32:04.523 "config": [] 00:32:04.523 }, 00:32:04.523 { 00:32:04.523 "subsystem": "accel", 00:32:04.523 "config": [ 00:32:04.523 { 00:32:04.523 "method": "accel_set_options", 00:32:04.523 "params": { 00:32:04.523 "small_cache_size": 128, 00:32:04.523 "large_cache_size": 16, 00:32:04.523 "task_count": 2048, 00:32:04.523 "sequence_count": 2048, 00:32:04.523 "buf_count": 2048 00:32:04.523 } 00:32:04.523 } 00:32:04.523 ] 00:32:04.523 }, 00:32:04.523 { 00:32:04.523 
"subsystem": "bdev", 00:32:04.523 "config": [ 00:32:04.523 { 00:32:04.523 "method": "bdev_set_options", 00:32:04.523 "params": { 00:32:04.523 "bdev_io_pool_size": 65535, 00:32:04.523 "bdev_io_cache_size": 256, 00:32:04.523 "bdev_auto_examine": true, 00:32:04.523 "iobuf_small_cache_size": 128, 00:32:04.523 "iobuf_large_cache_size": 16 00:32:04.523 } 00:32:04.523 }, 00:32:04.523 { 00:32:04.523 "method": "bdev_raid_set_options", 00:32:04.523 "params": { 00:32:04.523 "process_window_size_kb": 1024 00:32:04.523 } 00:32:04.523 }, 00:32:04.523 { 00:32:04.523 "method": "bdev_iscsi_set_options", 00:32:04.523 "params": { 00:32:04.523 "timeout_sec": 30 00:32:04.523 } 00:32:04.523 }, 00:32:04.523 { 00:32:04.523 "method": "bdev_nvme_set_options", 00:32:04.523 "params": { 00:32:04.523 "action_on_timeout": "none", 00:32:04.523 "timeout_us": 0, 00:32:04.523 "timeout_admin_us": 0, 00:32:04.523 "keep_alive_timeout_ms": 10000, 00:32:04.523 "arbitration_burst": 0, 00:32:04.523 "low_priority_weight": 0, 00:32:04.523 "medium_priority_weight": 0, 00:32:04.523 "high_priority_weight": 0, 00:32:04.524 "nvme_adminq_poll_period_us": 10000, 00:32:04.524 "nvme_ioq_poll_period_us": 0, 00:32:04.524 "io_queue_requests": 512, 00:32:04.524 "delay_cmd_submit": true, 00:32:04.524 "transport_retry_count": 4, 00:32:04.524 "bdev_retry_count": 3, 00:32:04.524 "transport_ack_timeout": 0, 00:32:04.524 "ctrlr_loss_timeout_sec": 0, 00:32:04.524 "reconnect_delay_sec": 0, 00:32:04.524 "fast_io_fail_timeout_sec": 0, 00:32:04.524 "disable_auto_failback": false, 00:32:04.524 "generate_uuids": false, 00:32:04.524 "transport_tos": 0, 00:32:04.524 "nvme_error_stat": false, 00:32:04.524 "rdma_srq_size": 0, 00:32:04.524 "io_path_stat": false, 00:32:04.524 "allow_accel_sequence": false, 00:32:04.524 "rdma_max_cq_size": 0, 00:32:04.524 "rdma_cm_event_timeout_ms": 0, 00:32:04.524 "dhchap_digests": [ 00:32:04.524 "sha256", 00:32:04.524 "sha384", 00:32:04.524 "sha512" 00:32:04.524 ], 00:32:04.524 "dhchap_dhgroups": [ 00:32:04.524 "null", 00:32:04.524 "ffdhe2048", 00:32:04.524 "ffdhe3072", 00:32:04.524 "ffdhe4096", 00:32:04.524 "ffdhe6144", 00:32:04.524 "ffdhe8192" 00:32:04.524 ] 00:32:04.524 } 00:32:04.524 }, 00:32:04.524 { 00:32:04.524 "method": "bdev_nvme_attach_controller", 00:32:04.524 "params": { 00:32:04.524 "name": "nvme0", 00:32:04.524 "trtype": "TCP", 00:32:04.524 "adrfam": "IPv4", 00:32:04.524 "traddr": "127.0.0.1", 00:32:04.524 "trsvcid": "4420", 00:32:04.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.524 "prchk_reftag": false, 00:32:04.524 "prchk_guard": false, 00:32:04.524 "ctrlr_loss_timeout_sec": 0, 00:32:04.524 "reconnect_delay_sec": 0, 00:32:04.524 "fast_io_fail_timeout_sec": 0, 00:32:04.524 "psk": "key0", 00:32:04.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:04.524 "hdgst": false, 00:32:04.524 "ddgst": false 00:32:04.524 } 00:32:04.524 }, 00:32:04.524 { 00:32:04.524 "method": "bdev_nvme_set_hotplug", 00:32:04.524 "params": { 00:32:04.524 "period_us": 100000, 00:32:04.524 "enable": false 00:32:04.524 } 00:32:04.524 }, 00:32:04.524 { 00:32:04.524 "method": "bdev_wait_for_examine" 00:32:04.524 } 00:32:04.524 ] 00:32:04.524 }, 00:32:04.524 { 00:32:04.524 "subsystem": "nbd", 00:32:04.524 "config": [] 00:32:04.524 } 00:32:04.524 ] 00:32:04.524 }' 00:32:04.524 11:42:47 keyring_file -- keyring/file.sh@114 -- # killprocess 806440 00:32:04.524 11:42:47 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 806440 ']' 00:32:04.524 11:42:47 keyring_file -- common/autotest_common.sh@952 -- # kill -0 806440 00:32:04.524 11:42:47 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:04.524 11:42:47 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:04.524 11:42:47 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 806440 00:32:04.524 11:42:48 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:04.524 11:42:48 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:04.524 11:42:48 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 806440' 00:32:04.524 killing process with pid 806440 00:32:04.524 11:42:48 keyring_file -- common/autotest_common.sh@967 -- # kill 806440 00:32:04.524 Received shutdown signal, test time was about 1.000000 seconds 00:32:04.524 00:32:04.524 Latency(us) 00:32:04.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.524 =================================================================================================================== 00:32:04.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:04.524 11:42:48 keyring_file -- common/autotest_common.sh@972 -- # wait 806440 00:32:04.784 11:42:48 keyring_file -- keyring/file.sh@117 -- # bperfpid=808044 00:32:04.784 11:42:48 keyring_file -- keyring/file.sh@119 -- # waitforlisten 808044 /var/tmp/bperf.sock 00:32:04.784 11:42:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 808044 ']' 00:32:04.784 11:42:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:04.784 11:42:48 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:04.784 11:42:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:04.784 11:42:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:04.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
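
Stripped of the xtrace noise, the keyring_file sequence above reduces to a short RPC exchange: write an interchange-format PSK into a file with 0600 permissions, register that file as a named key, attach the controller by key name, and read the reference count back. The sketch below condenses that flow; the $rpc/$sock shorthands and the mktemp path are illustrative, while the RPC names, flags and the key string are the ones visible in the trace.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

key_path=$(mktemp)
echo "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
chmod 0600 "$key_path"          # keyring/common.sh restricts the key file the same way

"$rpc" -s "$sock" keyring_file_add_key key0 "$key_path"
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# file.sh@99 above expects this to print 2 once the attached controller also holds the key
"$rpc" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'
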
00:32:04.784 11:42:48 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:04.784 "subsystems": [ 00:32:04.784 { 00:32:04.784 "subsystem": "keyring", 00:32:04.784 "config": [ 00:32:04.784 { 00:32:04.784 "method": "keyring_file_add_key", 00:32:04.784 "params": { 00:32:04.784 "name": "key0", 00:32:04.784 "path": "/tmp/tmp.8KmR1LM0Uj" 00:32:04.784 } 00:32:04.784 }, 00:32:04.784 { 00:32:04.784 "method": "keyring_file_add_key", 00:32:04.784 "params": { 00:32:04.784 "name": "key1", 00:32:04.784 "path": "/tmp/tmp.XIJ2YWoSEN" 00:32:04.784 } 00:32:04.784 } 00:32:04.784 ] 00:32:04.784 }, 00:32:04.784 { 00:32:04.784 "subsystem": "iobuf", 00:32:04.784 "config": [ 00:32:04.784 { 00:32:04.784 "method": "iobuf_set_options", 00:32:04.784 "params": { 00:32:04.784 "small_pool_count": 8192, 00:32:04.784 "large_pool_count": 1024, 00:32:04.784 "small_bufsize": 8192, 00:32:04.784 "large_bufsize": 135168 00:32:04.784 } 00:32:04.784 } 00:32:04.784 ] 00:32:04.784 }, 00:32:04.784 { 00:32:04.784 "subsystem": "sock", 00:32:04.784 "config": [ 00:32:04.784 { 00:32:04.784 "method": "sock_set_default_impl", 00:32:04.784 "params": { 00:32:04.784 "impl_name": "posix" 00:32:04.784 } 00:32:04.784 }, 00:32:04.784 { 00:32:04.784 "method": "sock_impl_set_options", 00:32:04.784 "params": { 00:32:04.784 "impl_name": "ssl", 00:32:04.784 "recv_buf_size": 4096, 00:32:04.784 "send_buf_size": 4096, 00:32:04.784 "enable_recv_pipe": true, 00:32:04.784 "enable_quickack": false, 00:32:04.784 "enable_placement_id": 0, 00:32:04.784 "enable_zerocopy_send_server": true, 00:32:04.784 "enable_zerocopy_send_client": false, 00:32:04.784 "zerocopy_threshold": 0, 00:32:04.784 "tls_version": 0, 00:32:04.784 "enable_ktls": false 00:32:04.784 } 00:32:04.784 }, 00:32:04.784 { 00:32:04.784 "method": "sock_impl_set_options", 00:32:04.784 "params": { 00:32:04.784 "impl_name": "posix", 00:32:04.784 "recv_buf_size": 2097152, 00:32:04.784 "send_buf_size": 2097152, 00:32:04.784 "enable_recv_pipe": true, 00:32:04.784 "enable_quickack": false, 00:32:04.784 "enable_placement_id": 0, 00:32:04.784 "enable_zerocopy_send_server": true, 00:32:04.784 "enable_zerocopy_send_client": false, 00:32:04.784 "zerocopy_threshold": 0, 00:32:04.784 "tls_version": 0, 00:32:04.784 "enable_ktls": false 00:32:04.784 } 00:32:04.784 } 00:32:04.784 ] 00:32:04.784 }, 00:32:04.784 { 00:32:04.784 "subsystem": "vmd", 00:32:04.784 "config": [] 00:32:04.784 }, 00:32:04.784 { 00:32:04.784 "subsystem": "accel", 00:32:04.784 "config": [ 00:32:04.784 { 00:32:04.784 "method": "accel_set_options", 00:32:04.784 "params": { 00:32:04.784 "small_cache_size": 128, 00:32:04.784 "large_cache_size": 16, 00:32:04.784 "task_count": 2048, 00:32:04.784 "sequence_count": 2048, 00:32:04.784 "buf_count": 2048 00:32:04.784 } 00:32:04.784 } 00:32:04.784 ] 00:32:04.784 }, 00:32:04.784 { 00:32:04.784 "subsystem": "bdev", 00:32:04.784 "config": [ 00:32:04.784 { 00:32:04.784 "method": "bdev_set_options", 00:32:04.784 "params": { 00:32:04.784 "bdev_io_pool_size": 65535, 00:32:04.784 "bdev_io_cache_size": 256, 00:32:04.784 "bdev_auto_examine": true, 00:32:04.784 "iobuf_small_cache_size": 128, 00:32:04.784 "iobuf_large_cache_size": 16 00:32:04.784 } 00:32:04.784 }, 00:32:04.784 { 00:32:04.784 "method": "bdev_raid_set_options", 00:32:04.784 "params": { 00:32:04.784 "process_window_size_kb": 1024 00:32:04.784 } 00:32:04.784 }, 00:32:04.784 { 00:32:04.784 "method": "bdev_iscsi_set_options", 00:32:04.784 "params": { 00:32:04.784 "timeout_sec": 30 00:32:04.784 } 00:32:04.784 }, 00:32:04.784 { 00:32:04.784 "method": 
"bdev_nvme_set_options", 00:32:04.784 "params": { 00:32:04.784 "action_on_timeout": "none", 00:32:04.784 "timeout_us": 0, 00:32:04.784 "timeout_admin_us": 0, 00:32:04.784 "keep_alive_timeout_ms": 10000, 00:32:04.784 "arbitration_burst": 0, 00:32:04.784 "low_priority_weight": 0, 00:32:04.784 "medium_priority_weight": 0, 00:32:04.784 "high_priority_weight": 0, 00:32:04.784 "nvme_adminq_poll_period_us": 10000, 00:32:04.784 "nvme_ioq_poll_period_us": 0, 00:32:04.784 "io_queue_requests": 512, 00:32:04.784 "delay_cmd_submit": true, 00:32:04.784 "transport_retry_count": 4, 00:32:04.784 "bdev_retry_count": 3, 00:32:04.784 "transport_ack_timeout": 0, 00:32:04.784 "ctrlr_loss_timeout_sec": 0, 00:32:04.784 "reconnect_delay_sec": 0, 00:32:04.784 "fast_io_fail_timeout_sec": 0, 00:32:04.784 "disable_auto_failback": false, 00:32:04.784 "generate_uuids": false, 00:32:04.784 "transport_tos": 0, 00:32:04.784 "nvme_error_stat": false, 00:32:04.784 "rdma_srq_size": 0, 00:32:04.784 "io_path_stat": false, 00:32:04.784 "allow_accel_sequence": false, 00:32:04.784 "rdma_max_cq_size": 0, 00:32:04.784 "rdma_cm_event_timeout_ms": 0, 00:32:04.784 "dhchap_digests": [ 00:32:04.784 "sha256", 00:32:04.784 "sha384", 00:32:04.784 "sha512" 00:32:04.784 ], 00:32:04.784 "dhchap_dhgroups": [ 00:32:04.784 "null", 00:32:04.784 "ffdhe2048", 00:32:04.784 "ffdhe3072", 00:32:04.784 "ffdhe4096", 00:32:04.784 "ffdhe6144", 00:32:04.784 "ffdhe8192" 00:32:04.784 ] 00:32:04.784 } 00:32:04.784 }, 00:32:04.784 { 00:32:04.784 "method": "bdev_nvme_attach_controller", 00:32:04.784 "params": { 00:32:04.784 "name": "nvme0", 00:32:04.784 "trtype": "TCP", 00:32:04.784 "adrfam": "IPv4", 00:32:04.784 "traddr": "127.0.0.1", 00:32:04.784 "trsvcid": "4420", 00:32:04.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.785 "prchk_reftag": false, 00:32:04.785 "prchk_guard": false, 00:32:04.785 "ctrlr_loss_timeout_sec": 0, 00:32:04.785 "reconnect_delay_sec": 0, 00:32:04.785 "fast_io_fail_timeout_sec": 0, 00:32:04.785 "psk": "key0", 00:32:04.785 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:04.785 "hdgst": false, 00:32:04.785 "ddgst": false 00:32:04.785 } 00:32:04.785 }, 00:32:04.785 { 00:32:04.785 "method": "bdev_nvme_set_hotplug", 00:32:04.785 "params": { 00:32:04.785 "period_us": 100000, 00:32:04.785 "enable": false 00:32:04.785 } 00:32:04.785 }, 00:32:04.785 { 00:32:04.785 "method": "bdev_wait_for_examine" 00:32:04.785 } 00:32:04.785 ] 00:32:04.785 }, 00:32:04.785 { 00:32:04.785 "subsystem": "nbd", 00:32:04.785 "config": [] 00:32:04.785 } 00:32:04.785 ] 00:32:04.785 }' 00:32:04.785 11:42:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:04.785 11:42:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:04.785 [2024-07-15 11:42:48.252517] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:32:04.785 [2024-07-15 11:42:48.252565] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid808044 ] 00:32:04.785 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.785 [2024-07-15 11:42:48.320505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.044 [2024-07-15 11:42:48.401107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.044 [2024-07-15 11:42:48.559404] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:05.612 11:42:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:05.612 11:42:49 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:05.612 11:42:49 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:05.612 11:42:49 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:05.612 11:42:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.871 11:42:49 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:05.871 11:42:49 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:05.871 11:42:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:05.871 11:42:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:05.871 11:42:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.871 11:42:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.871 11:42:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.871 11:42:49 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:05.871 11:42:49 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:05.871 11:42:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:05.871 11:42:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:05.871 11:42:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.871 11:42:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.871 11:42:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:06.130 11:42:49 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:06.130 11:42:49 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:06.130 11:42:49 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:06.130 11:42:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:06.389 11:42:49 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:06.389 11:42:49 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:06.389 11:42:49 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.8KmR1LM0Uj /tmp/tmp.XIJ2YWoSEN 00:32:06.389 11:42:49 keyring_file -- keyring/file.sh@20 -- # killprocess 808044 00:32:06.389 11:42:49 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 808044 ']' 00:32:06.389 11:42:49 keyring_file -- common/autotest_common.sh@952 -- # kill -0 808044 00:32:06.389 11:42:49 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:32:06.389 11:42:49 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:06.389 11:42:49 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 808044 00:32:06.389 11:42:49 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:06.389 11:42:49 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:06.389 11:42:49 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 808044' 00:32:06.389 killing process with pid 808044 00:32:06.389 11:42:49 keyring_file -- common/autotest_common.sh@967 -- # kill 808044 00:32:06.389 Received shutdown signal, test time was about 1.000000 seconds 00:32:06.389 00:32:06.389 Latency(us) 00:32:06.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.389 =================================================================================================================== 00:32:06.389 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:06.389 11:42:49 keyring_file -- common/autotest_common.sh@972 -- # wait 808044 00:32:06.648 11:42:50 keyring_file -- keyring/file.sh@21 -- # killprocess 806346 00:32:06.648 11:42:50 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 806346 ']' 00:32:06.648 11:42:50 keyring_file -- common/autotest_common.sh@952 -- # kill -0 806346 00:32:06.648 11:42:50 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:06.648 11:42:50 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:06.648 11:42:50 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 806346 00:32:06.648 11:42:50 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:06.648 11:42:50 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:06.648 11:42:50 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 806346' 00:32:06.648 killing process with pid 806346 00:32:06.648 11:42:50 keyring_file -- common/autotest_common.sh@967 -- # kill 806346 00:32:06.648 [2024-07-15 11:42:50.065691] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:06.648 11:42:50 keyring_file -- common/autotest_common.sh@972 -- # wait 806346 00:32:06.907 00:32:06.907 real 0m12.171s 00:32:06.907 user 0m29.048s 00:32:06.907 sys 0m2.802s 00:32:06.907 11:42:50 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:06.907 11:42:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:06.907 ************************************ 00:32:06.907 END TEST keyring_file 00:32:06.907 ************************************ 00:32:06.907 11:42:50 -- common/autotest_common.sh@1142 -- # return 0 00:32:06.907 11:42:50 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:06.907 11:42:50 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:06.907 11:42:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:06.907 11:42:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:06.907 11:42:50 -- common/autotest_common.sh@10 -- # set +x 00:32:06.907 ************************************ 00:32:06.907 START TEST keyring_linux 00:32:06.907 ************************************ 00:32:06.907 11:42:50 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:07.166 * Looking for test storage... 00:32:07.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:07.166 11:42:50 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:07.166 11:42:50 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:07.166 11:42:50 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:07.167 11:42:50 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:07.167 11:42:50 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:07.167 11:42:50 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:07.167 11:42:50 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.167 11:42:50 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.167 11:42:50 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.167 11:42:50 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:07.167 11:42:50 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:07.167 11:42:50 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:07.167 11:42:50 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:07.167 11:42:50 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:07.167 11:42:50 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:07.167 11:42:50 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:07.167 11:42:50 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:07.167 11:42:50 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:07.167 /tmp/:spdk-test:key0 00:32:07.167 11:42:50 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:07.167 11:42:50 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:07.167 11:42:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:07.167 /tmp/:spdk-test:key1 00:32:07.167 11:42:50 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:07.167 11:42:50 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=808426 00:32:07.167 11:42:50 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 808426 00:32:07.167 11:42:50 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 808426 ']' 00:32:07.167 11:42:50 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.167 11:42:50 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:07.167 11:42:50 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.167 11:42:50 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:07.167 11:42:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:07.167 [2024-07-15 11:42:50.679041] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
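
The prep_key/format_interchange_psk helpers traced above delegate the actual key formatting to an inline "python -" snippet. Judging from the output seen later in the run (the base64 decodes back to the literal hex string), the transformation is roughly: treat the configured key as ASCII, append a CRC32, base64-encode, and wrap the result as NVMeTLSkey-1:<digest>:<...>:. The sketch below is a reconstruction under those assumptions (in particular the CRC byte order); test/nvmf/common.sh is the authoritative implementation.

key=00112233445566778899aabbccddeeff
digest=0
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()                     # the hex string is used as-is, not hex-decoded
digest = int(sys.argv[2])                      # 0 = no PSK digest
crc = zlib.crc32(key).to_bytes(4, "little")    # assumed little-endian
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
# expected shape for a 32-character key: NVMeTLSkey-1:00:<48 base64 characters>:
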
00:32:07.167 [2024-07-15 11:42:50.679089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid808426 ] 00:32:07.167 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.167 [2024-07-15 11:42:50.746476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.426 [2024-07-15 11:42:50.827188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.992 11:42:51 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:07.992 11:42:51 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:07.992 11:42:51 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:07.992 11:42:51 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.992 11:42:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:07.992 [2024-07-15 11:42:51.495645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.992 null0 00:32:07.993 [2024-07-15 11:42:51.527698] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:07.993 [2024-07-15 11:42:51.528040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:07.993 11:42:51 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.993 11:42:51 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:07.993 269565746 00:32:07.993 11:42:51 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:07.993 845139097 00:32:07.993 11:42:51 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=808659 00:32:07.993 11:42:51 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 808659 /var/tmp/bperf.sock 00:32:07.993 11:42:51 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:07.993 11:42:51 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 808659 ']' 00:32:07.993 11:42:51 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:07.993 11:42:51 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:07.993 11:42:51 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:07.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:07.993 11:42:51 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:07.993 11:42:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:08.251 [2024-07-15 11:42:51.599881] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
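
keyring_linux repeats the same attach-and-refcount exercise but sources the PSKs from the kernel session keyring instead of files: the keys are loaded with keyctl, bdevperf is started with --wait-for-rpc so the keyring_linux plugin can be enabled before framework initialization, and the controller then names the key as :spdk-test:key0. Condensed, with $rpc/$sock again as shorthands:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# load the PSK into the session keyring; keyctl prints the key's serial number
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

# bdevperf is waiting on --wait-for-rpc, so the plugin gets enabled first
"$rpc" -s "$sock" keyring_linux_set_options --enable
"$rpc" -s "$sock" framework_start_init

# the controller references the kernel key by name rather than by file path
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# linux.sh@25-27 then checks that keyring_get_keys' .sn matches the kernel's serial for the key
keyctl print "$(keyctl search @s user :spdk-test:key0)"
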
00:32:08.251 [2024-07-15 11:42:51.599922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid808659 ] 00:32:08.251 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.251 [2024-07-15 11:42:51.667422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.251 [2024-07-15 11:42:51.739881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.828 11:42:52 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:08.828 11:42:52 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:08.828 11:42:52 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:08.828 11:42:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:09.123 11:42:52 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:09.123 11:42:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:09.381 11:42:52 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:09.381 11:42:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:09.381 [2024-07-15 11:42:52.955558] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:09.638 nvme0n1 00:32:09.638 11:42:53 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:09.638 11:42:53 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:09.638 11:42:53 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:09.638 11:42:53 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:09.638 11:42:53 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:09.638 11:42:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:09.638 11:42:53 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:09.638 11:42:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:09.638 11:42:53 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:09.638 11:42:53 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:09.638 11:42:53 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:09.638 11:42:53 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:09.638 11:42:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:09.895 11:42:53 keyring_linux -- keyring/linux.sh@25 -- # sn=269565746 00:32:09.895 11:42:53 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:09.895 11:42:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:09.895 11:42:53 keyring_linux -- keyring/linux.sh@26 -- # [[ 269565746 == \2\6\9\5\6\5\7\4\6 ]] 00:32:09.895 11:42:53 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 269565746 00:32:09.895 11:42:53 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:09.895 11:42:53 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:10.153 Running I/O for 1 seconds... 00:32:11.088 00:32:11.089 Latency(us) 00:32:11.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.089 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:11.089 nvme0n1 : 1.01 17108.65 66.83 0.00 0.00 7450.61 6069.20 14303.94 00:32:11.089 =================================================================================================================== 00:32:11.089 Total : 17108.65 66.83 0.00 0.00 7450.61 6069.20 14303.94 00:32:11.089 0 00:32:11.089 11:42:54 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:11.089 11:42:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:11.347 11:42:54 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:11.347 11:42:54 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:11.347 11:42:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:11.347 11:42:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:11.347 11:42:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:11.347 11:42:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:11.347 11:42:54 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:11.347 11:42:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:11.347 11:42:54 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:11.347 11:42:54 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:11.347 11:42:54 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:32:11.347 11:42:54 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:11.347 11:42:54 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:11.347 11:42:54 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:11.347 11:42:54 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:11.347 11:42:54 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:11.347 11:42:54 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:11.347 11:42:54 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:11.605 [2024-07-15 11:42:55.068134] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:11.605 [2024-07-15 11:42:55.068867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b4fd0 (107): Transport endpoint is not connected 00:32:11.605 [2024-07-15 11:42:55.069862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b4fd0 (9): Bad file descriptor 00:32:11.605 [2024-07-15 11:42:55.070864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.605 [2024-07-15 11:42:55.070872] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:11.605 [2024-07-15 11:42:55.070878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.605 request: 00:32:11.605 { 00:32:11.605 "name": "nvme0", 00:32:11.605 "trtype": "tcp", 00:32:11.605 "traddr": "127.0.0.1", 00:32:11.605 "adrfam": "ipv4", 00:32:11.605 "trsvcid": "4420", 00:32:11.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:11.605 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:11.605 "prchk_reftag": false, 00:32:11.605 "prchk_guard": false, 00:32:11.605 "hdgst": false, 00:32:11.605 "ddgst": false, 00:32:11.605 "psk": ":spdk-test:key1", 00:32:11.605 "method": "bdev_nvme_attach_controller", 00:32:11.605 "req_id": 1 00:32:11.605 } 00:32:11.605 Got JSON-RPC error response 00:32:11.605 response: 00:32:11.605 { 00:32:11.605 "code": -5, 00:32:11.605 "message": "Input/output error" 00:32:11.605 } 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@33 -- # sn=269565746 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 269565746 00:32:11.605 1 links removed 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@33 -- # sn=845139097 00:32:11.605 
11:42:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 845139097 00:32:11.605 1 links removed 00:32:11.605 11:42:55 keyring_linux -- keyring/linux.sh@41 -- # killprocess 808659 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 808659 ']' 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 808659 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 808659 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 808659' 00:32:11.605 killing process with pid 808659 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@967 -- # kill 808659 00:32:11.605 Received shutdown signal, test time was about 1.000000 seconds 00:32:11.605 00:32:11.605 Latency(us) 00:32:11.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.605 =================================================================================================================== 00:32:11.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:11.605 11:42:55 keyring_linux -- common/autotest_common.sh@972 -- # wait 808659 00:32:11.864 11:42:55 keyring_linux -- keyring/linux.sh@42 -- # killprocess 808426 00:32:11.864 11:42:55 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 808426 ']' 00:32:11.864 11:42:55 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 808426 00:32:11.864 11:42:55 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:11.864 11:42:55 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:11.864 11:42:55 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 808426 00:32:11.864 11:42:55 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:11.864 11:42:55 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:11.864 11:42:55 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 808426' 00:32:11.864 killing process with pid 808426 00:32:11.864 11:42:55 keyring_linux -- common/autotest_common.sh@967 -- # kill 808426 00:32:11.864 11:42:55 keyring_linux -- common/autotest_common.sh@972 -- # wait 808426 00:32:12.123 00:32:12.123 real 0m5.239s 00:32:12.123 user 0m9.308s 00:32:12.123 sys 0m1.562s 00:32:12.123 11:42:55 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:12.123 11:42:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:12.123 ************************************ 00:32:12.123 END TEST keyring_linux 00:32:12.123 ************************************ 00:32:12.381 11:42:55 -- common/autotest_common.sh@1142 -- # return 0 00:32:12.381 11:42:55 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:12.381 11:42:55 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:12.381 11:42:55 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:12.381 11:42:55 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:12.381 11:42:55 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:12.381 11:42:55 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:12.381 11:42:55 -- spdk/autotest.sh@339 -- # 
'[' 0 -eq 1 ']' 00:32:12.381 11:42:55 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:12.381 11:42:55 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:12.381 11:42:55 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:12.381 11:42:55 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:12.381 11:42:55 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:12.381 11:42:55 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:12.381 11:42:55 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:12.381 11:42:55 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:12.381 11:42:55 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:12.381 11:42:55 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:12.381 11:42:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:12.381 11:42:55 -- common/autotest_common.sh@10 -- # set +x 00:32:12.381 11:42:55 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:12.381 11:42:55 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:12.381 11:42:55 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:12.381 11:42:55 -- common/autotest_common.sh@10 -- # set +x 00:32:17.657 INFO: APP EXITING 00:32:17.657 INFO: killing all VMs 00:32:17.657 INFO: killing vhost app 00:32:17.657 INFO: EXIT DONE 00:32:20.191 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:32:20.191 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:20.191 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:22.727 Cleaning 00:32:22.727 Removing: /var/run/dpdk/spdk0/config 00:32:22.727 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:22.727 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:22.728 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:22.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:22.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:22.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:22.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:22.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:22.987 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:22.987 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:22.987 Removing: /var/run/dpdk/spdk1/config 00:32:22.987 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:22.987 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:22.987 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:22.987 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:22.987 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:22.987 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:22.987 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:22.987 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:22.987 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:22.987 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:22.987 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:22.987 Removing: /var/run/dpdk/spdk2/config 00:32:22.987 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:22.987 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:22.987 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:22.987 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:22.987 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:22.987 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:22.987 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:22.987 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:22.987 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:22.987 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:22.987 Removing: /var/run/dpdk/spdk3/config 00:32:22.987 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:22.987 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:22.987 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:22.987 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:22.987 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:22.987 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:22.987 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:22.987 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:22.987 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:22.987 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:22.987 Removing: /var/run/dpdk/spdk4/config 00:32:22.987 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:22.987 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:22.987 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:22.987 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:22.987 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:22.987 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:22.987 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:22.987 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:22.987 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:22.987 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:22.987 Removing: /dev/shm/bdev_svc_trace.1 00:32:22.987 Removing: /dev/shm/nvmf_trace.0 00:32:22.987 Removing: /dev/shm/spdk_tgt_trace.pid419832 00:32:22.987 Removing: /var/run/dpdk/spdk0 00:32:22.987 Removing: /var/run/dpdk/spdk1 00:32:22.988 Removing: /var/run/dpdk/spdk2 00:32:22.988 Removing: /var/run/dpdk/spdk3 00:32:22.988 Removing: /var/run/dpdk/spdk4 00:32:22.988 Removing: /var/run/dpdk/spdk_pid417697 00:32:22.988 Removing: /var/run/dpdk/spdk_pid418764 00:32:22.988 Removing: /var/run/dpdk/spdk_pid419832 00:32:22.988 Removing: /var/run/dpdk/spdk_pid420467 00:32:22.988 Removing: /var/run/dpdk/spdk_pid421418 00:32:22.988 Removing: /var/run/dpdk/spdk_pid421654 00:32:23.247 Removing: /var/run/dpdk/spdk_pid422625 00:32:23.248 Removing: /var/run/dpdk/spdk_pid422858 00:32:23.248 Removing: /var/run/dpdk/spdk_pid422993 00:32:23.248 Removing: /var/run/dpdk/spdk_pid424551 00:32:23.248 Removing: /var/run/dpdk/spdk_pid425834 00:32:23.248 Removing: /var/run/dpdk/spdk_pid426192 
00:32:23.248 Removing: /var/run/dpdk/spdk_pid426321 00:32:23.248 Removing: /var/run/dpdk/spdk_pid426735 00:32:23.248 Removing: /var/run/dpdk/spdk_pid427103 00:32:23.248 Removing: /var/run/dpdk/spdk_pid427310 00:32:23.248 Removing: /var/run/dpdk/spdk_pid427513 00:32:23.248 Removing: /var/run/dpdk/spdk_pid427807 00:32:23.248 Removing: /var/run/dpdk/spdk_pid428652 00:32:23.248 Removing: /var/run/dpdk/spdk_pid431645 00:32:23.248 Removing: /var/run/dpdk/spdk_pid431908 00:32:23.248 Removing: /var/run/dpdk/spdk_pid432185 00:32:23.248 Removing: /var/run/dpdk/spdk_pid432401 00:32:23.248 Removing: /var/run/dpdk/spdk_pid432894 00:32:23.248 Removing: /var/run/dpdk/spdk_pid432971 00:32:23.248 Removing: /var/run/dpdk/spdk_pid433397 00:32:23.248 Removing: /var/run/dpdk/spdk_pid433625 00:32:23.248 Removing: /var/run/dpdk/spdk_pid433893 00:32:23.248 Removing: /var/run/dpdk/spdk_pid433902 00:32:23.248 Removing: /var/run/dpdk/spdk_pid434165 00:32:23.248 Removing: /var/run/dpdk/spdk_pid434392 00:32:23.248 Removing: /var/run/dpdk/spdk_pid434761 00:32:23.248 Removing: /var/run/dpdk/spdk_pid434987 00:32:23.248 Removing: /var/run/dpdk/spdk_pid435288 00:32:23.248 Removing: /var/run/dpdk/spdk_pid435558 00:32:23.248 Removing: /var/run/dpdk/spdk_pid435778 00:32:23.248 Removing: /var/run/dpdk/spdk_pid435850 00:32:23.248 Removing: /var/run/dpdk/spdk_pid436098 00:32:23.248 Removing: /var/run/dpdk/spdk_pid436348 00:32:23.248 Removing: /var/run/dpdk/spdk_pid436601 00:32:23.248 Removing: /var/run/dpdk/spdk_pid436884 00:32:23.248 Removing: /var/run/dpdk/spdk_pid437218 00:32:23.248 Removing: /var/run/dpdk/spdk_pid437472 00:32:23.248 Removing: /var/run/dpdk/spdk_pid437723 00:32:23.248 Removing: /var/run/dpdk/spdk_pid437974 00:32:23.248 Removing: /var/run/dpdk/spdk_pid438225 00:32:23.248 Removing: /var/run/dpdk/spdk_pid438717 00:32:23.248 Removing: /var/run/dpdk/spdk_pid439132 00:32:23.248 Removing: /var/run/dpdk/spdk_pid439400 00:32:23.248 Removing: /var/run/dpdk/spdk_pid439676 00:32:23.248 Removing: /var/run/dpdk/spdk_pid439940 00:32:23.248 Removing: /var/run/dpdk/spdk_pid440215 00:32:23.248 Removing: /var/run/dpdk/spdk_pid440491 00:32:23.248 Removing: /var/run/dpdk/spdk_pid440772 00:32:23.248 Removing: /var/run/dpdk/spdk_pid441065 00:32:23.248 Removing: /var/run/dpdk/spdk_pid441331 00:32:23.248 Removing: /var/run/dpdk/spdk_pid441585 00:32:23.248 Removing: /var/run/dpdk/spdk_pid441655 00:32:23.248 Removing: /var/run/dpdk/spdk_pid441964 00:32:23.248 Removing: /var/run/dpdk/spdk_pid445827 00:32:23.248 Removing: /var/run/dpdk/spdk_pid490188 00:32:23.248 Removing: /var/run/dpdk/spdk_pid494438 00:32:23.248 Removing: /var/run/dpdk/spdk_pid504449 00:32:23.248 Removing: /var/run/dpdk/spdk_pid509882 00:32:23.248 Removing: /var/run/dpdk/spdk_pid513852 00:32:23.248 Removing: /var/run/dpdk/spdk_pid514537 00:32:23.248 Removing: /var/run/dpdk/spdk_pid520572 00:32:23.248 Removing: /var/run/dpdk/spdk_pid526788 00:32:23.248 Removing: /var/run/dpdk/spdk_pid526791 00:32:23.248 Removing: /var/run/dpdk/spdk_pid527702 00:32:23.248 Removing: /var/run/dpdk/spdk_pid528430 00:32:23.248 Removing: /var/run/dpdk/spdk_pid529323 00:32:23.248 Removing: /var/run/dpdk/spdk_pid529966 00:32:23.248 Removing: /var/run/dpdk/spdk_pid530012 00:32:23.248 Removing: /var/run/dpdk/spdk_pid530240 00:32:23.507 Removing: /var/run/dpdk/spdk_pid530256 00:32:23.507 Removing: /var/run/dpdk/spdk_pid530305 00:32:23.507 Removing: /var/run/dpdk/spdk_pid531169 00:32:23.507 Removing: /var/run/dpdk/spdk_pid532209 00:32:23.507 Removing: /var/run/dpdk/spdk_pid533297 00:32:23.507 
Removing: /var/run/dpdk/spdk_pid533989 00:32:23.507 Removing: /var/run/dpdk/spdk_pid534039 00:32:23.507 Removing: /var/run/dpdk/spdk_pid534376 00:32:23.507 Removing: /var/run/dpdk/spdk_pid535519 00:32:23.507 Removing: /var/run/dpdk/spdk_pid536668 00:32:23.507 Removing: /var/run/dpdk/spdk_pid544996 00:32:23.507 Removing: /var/run/dpdk/spdk_pid545248 00:32:23.507 Removing: /var/run/dpdk/spdk_pid549497 00:32:23.507 Removing: /var/run/dpdk/spdk_pid555374 00:32:23.507 Removing: /var/run/dpdk/spdk_pid557997 00:32:23.507 Removing: /var/run/dpdk/spdk_pid568391 00:32:23.507 Removing: /var/run/dpdk/spdk_pid577621 00:32:23.507 Removing: /var/run/dpdk/spdk_pid579625 00:32:23.507 Removing: /var/run/dpdk/spdk_pid580556 00:32:23.507 Removing: /var/run/dpdk/spdk_pid597356 00:32:23.507 Removing: /var/run/dpdk/spdk_pid601134 00:32:23.507 Removing: /var/run/dpdk/spdk_pid626766 00:32:23.507 Removing: /var/run/dpdk/spdk_pid631261 00:32:23.507 Removing: /var/run/dpdk/spdk_pid632869 00:32:23.507 Removing: /var/run/dpdk/spdk_pid634704 00:32:23.507 Removing: /var/run/dpdk/spdk_pid634944 00:32:23.507 Removing: /var/run/dpdk/spdk_pid635176 00:32:23.507 Removing: /var/run/dpdk/spdk_pid635418 00:32:23.507 Removing: /var/run/dpdk/spdk_pid635929 00:32:23.507 Removing: /var/run/dpdk/spdk_pid637793 00:32:23.507 Removing: /var/run/dpdk/spdk_pid638808 00:32:23.507 Removing: /var/run/dpdk/spdk_pid639382 00:32:23.507 Removing: /var/run/dpdk/spdk_pid641577 00:32:23.507 Removing: /var/run/dpdk/spdk_pid642300 00:32:23.507 Removing: /var/run/dpdk/spdk_pid643025 00:32:23.507 Removing: /var/run/dpdk/spdk_pid647075 00:32:23.507 Removing: /var/run/dpdk/spdk_pid657015 00:32:23.507 Removing: /var/run/dpdk/spdk_pid661565 00:32:23.507 Removing: /var/run/dpdk/spdk_pid667568 00:32:23.507 Removing: /var/run/dpdk/spdk_pid668924 00:32:23.507 Removing: /var/run/dpdk/spdk_pid670417 00:32:23.507 Removing: /var/run/dpdk/spdk_pid674889 00:32:23.507 Removing: /var/run/dpdk/spdk_pid678957 00:32:23.507 Removing: /var/run/dpdk/spdk_pid686504 00:32:23.507 Removing: /var/run/dpdk/spdk_pid686532 00:32:23.507 Removing: /var/run/dpdk/spdk_pid691039 00:32:23.507 Removing: /var/run/dpdk/spdk_pid691263 00:32:23.507 Removing: /var/run/dpdk/spdk_pid691489 00:32:23.507 Removing: /var/run/dpdk/spdk_pid691945 00:32:23.507 Removing: /var/run/dpdk/spdk_pid691950 00:32:23.507 Removing: /var/run/dpdk/spdk_pid696432 00:32:23.507 Removing: /var/run/dpdk/spdk_pid697000 00:32:23.507 Removing: /var/run/dpdk/spdk_pid701331 00:32:23.507 Removing: /var/run/dpdk/spdk_pid704107 00:32:23.507 Removing: /var/run/dpdk/spdk_pid709641 00:32:23.507 Removing: /var/run/dpdk/spdk_pid715561 00:32:23.507 Removing: /var/run/dpdk/spdk_pid724116 00:32:23.507 Removing: /var/run/dpdk/spdk_pid731354 00:32:23.507 Removing: /var/run/dpdk/spdk_pid731356 00:32:23.507 Removing: /var/run/dpdk/spdk_pid749634 00:32:23.507 Removing: /var/run/dpdk/spdk_pid750331 00:32:23.507 Removing: /var/run/dpdk/spdk_pid750915 00:32:23.507 Removing: /var/run/dpdk/spdk_pid751514 00:32:23.507 Removing: /var/run/dpdk/spdk_pid752481 00:32:23.507 Removing: /var/run/dpdk/spdk_pid753176 00:32:23.507 Removing: /var/run/dpdk/spdk_pid753686 00:32:23.767 Removing: /var/run/dpdk/spdk_pid754482 00:32:23.767 Removing: /var/run/dpdk/spdk_pid759133 00:32:23.767 Removing: /var/run/dpdk/spdk_pid759380 00:32:23.767 Removing: /var/run/dpdk/spdk_pid765425 00:32:23.767 Removing: /var/run/dpdk/spdk_pid765703 00:32:23.767 Removing: /var/run/dpdk/spdk_pid767928 00:32:23.767 Removing: /var/run/dpdk/spdk_pid775669 00:32:23.767 Removing: 
/var/run/dpdk/spdk_pid775674 00:32:23.767 Removing: /var/run/dpdk/spdk_pid780931 00:32:23.767 Removing: /var/run/dpdk/spdk_pid782847 00:32:23.767 Removing: /var/run/dpdk/spdk_pid784806 00:32:23.767 Removing: /var/run/dpdk/spdk_pid785910 00:32:23.767 Removing: /var/run/dpdk/spdk_pid787889 00:32:23.767 Removing: /var/run/dpdk/spdk_pid788948 00:32:23.767 Removing: /var/run/dpdk/spdk_pid797805 00:32:23.767 Removing: /var/run/dpdk/spdk_pid798287 00:32:23.767 Removing: /var/run/dpdk/spdk_pid799270 00:32:23.767 Removing: /var/run/dpdk/spdk_pid801604 00:32:23.767 Removing: /var/run/dpdk/spdk_pid802067 00:32:23.767 Removing: /var/run/dpdk/spdk_pid802531 00:32:23.767 Removing: /var/run/dpdk/spdk_pid806346 00:32:23.767 Removing: /var/run/dpdk/spdk_pid806440 00:32:23.767 Removing: /var/run/dpdk/spdk_pid808044 00:32:23.767 Removing: /var/run/dpdk/spdk_pid808426 00:32:23.767 Removing: /var/run/dpdk/spdk_pid808659 00:32:23.767 Clean 00:32:23.767 11:43:07 -- common/autotest_common.sh@1451 -- # return 0 00:32:23.767 11:43:07 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:23.767 11:43:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:23.767 11:43:07 -- common/autotest_common.sh@10 -- # set +x 00:32:23.767 11:43:07 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:23.767 11:43:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:23.767 11:43:07 -- common/autotest_common.sh@10 -- # set +x 00:32:23.767 11:43:07 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:23.767 11:43:07 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:23.767 11:43:07 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:23.767 11:43:07 -- spdk/autotest.sh@391 -- # hash lcov 00:32:23.767 11:43:07 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:23.767 11:43:07 -- spdk/autotest.sh@393 -- # hostname 00:32:23.767 11:43:07 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:24.026 geninfo: WARNING: invalid characters removed from testname! 
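The entries above finish post-test cleanup (the per-process DPDK runtime files under /var/run/dpdk and the spdk_pid lock files are removed) and capture the run's coverage counters into cov_test.info; the entries below merge that capture with the pre-test baseline and strip out-of-tree paths. A minimal sketch of that coverage flow in shell, assuming hypothetical REPO and OUT locations; the lcov options are the ones visible in the surrounding log, everything else (variable names, paths, the loop) is illustrative and not the job's own autotest.sh code:

# Sketch only: REPO, OUT and RC are assumed names, not part of autotest.sh.
REPO=/path/to/spdk        # tree the tests were built from (hypothetical path)
OUT=$REPO/../output       # where the cov_*.info files are kept (hypothetical path)
RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1"

# 1) Capture counters from the build directory, tagged with the hostname.
#    $RC is intentionally unquoted so it expands into separate options.
lcov $RC --no-external -q -c -d "$REPO" -t "$(hostname)" -o "$OUT/cov_test.info"

# 2) Merge the pre-test baseline with the post-test capture.
lcov $RC --no-external -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3) Remove vendored and out-of-tree paths so the report only covers SPDK code.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $RC --no-external -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done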
00:32:46.024 11:43:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:46.592 11:43:30 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:48.499 11:43:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:50.411 11:43:33 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:52.318 11:43:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:54.223 11:43:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:55.605 11:43:39 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:55.865 11:43:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:55.865 11:43:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:55.865 11:43:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:55.865 11:43:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:55.865 11:43:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.865 11:43:39 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.865 11:43:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.865 11:43:39 -- paths/export.sh@5 -- $ export PATH 00:32:55.865 11:43:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.865 11:43:39 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:32:55.865 11:43:39 -- common/autobuild_common.sh@444 -- $ date +%s 00:32:55.865 11:43:39 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721036619.XXXXXX 00:32:55.865 11:43:39 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721036619.YzyHsd 00:32:55.865 11:43:39 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:32:55.865 11:43:39 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:32:55.865 11:43:39 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:32:55.865 11:43:39 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:32:55.865 11:43:39 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:32:55.865 11:43:39 -- common/autobuild_common.sh@460 -- $ get_config_params 00:32:55.865 11:43:39 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:32:55.865 11:43:39 -- common/autotest_common.sh@10 -- $ set +x 00:32:55.865 11:43:39 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:32:55.865 11:43:39 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:32:55.865 11:43:39 -- pm/common@17 -- $ local monitor 00:32:55.865 11:43:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:55.865 11:43:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:55.865 11:43:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:55.865 11:43:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:55.865 11:43:39 -- pm/common@21 -- $ date +%s 00:32:55.865 11:43:39 -- pm/common@25 -- $ sleep 1 00:32:55.865 
11:43:39 -- pm/common@21 -- $ date +%s 00:32:55.865 11:43:39 -- pm/common@21 -- $ date +%s 00:32:55.865 11:43:39 -- pm/common@21 -- $ date +%s 00:32:55.865 11:43:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721036619 00:32:55.865 11:43:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721036619 00:32:55.865 11:43:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721036619 00:32:55.865 11:43:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721036619 00:32:55.865 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721036619_collect-vmstat.pm.log 00:32:55.865 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721036619_collect-cpu-load.pm.log 00:32:55.865 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721036619_collect-cpu-temp.pm.log 00:32:55.865 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721036619_collect-bmc-pm.bmc.pm.log 00:32:56.806 11:43:40 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:32:56.806 11:43:40 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:32:56.806 11:43:40 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:56.806 11:43:40 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:56.806 11:43:40 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:56.806 11:43:40 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:56.806 11:43:40 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:56.806 11:43:40 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:56.806 11:43:40 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:56.806 11:43:40 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:56.806 11:43:40 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:56.806 11:43:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:56.806 11:43:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:56.806 11:43:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:56.806 11:43:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:32:56.806 11:43:40 -- pm/common@44 -- $ pid=818937 00:32:56.806 11:43:40 -- pm/common@50 -- $ kill -TERM 818937 00:32:56.806 11:43:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:56.806 11:43:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:32:56.806 11:43:40 -- pm/common@44 -- $ pid=818938 00:32:56.806 11:43:40 -- pm/common@50 -- $ kill 
-TERM 818938 00:32:56.806 11:43:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:56.806 11:43:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:32:56.806 11:43:40 -- pm/common@44 -- $ pid=818940 00:32:56.806 11:43:40 -- pm/common@50 -- $ kill -TERM 818940 00:32:56.806 11:43:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:56.806 11:43:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:32:56.806 11:43:40 -- pm/common@44 -- $ pid=818963 00:32:56.806 11:43:40 -- pm/common@50 -- $ sudo -E kill -TERM 818963 00:32:56.806 + [[ -n 312961 ]] 00:32:56.806 + sudo kill 312961 00:32:56.819 [Pipeline] } 00:32:56.836 [Pipeline] // stage 00:32:56.840 [Pipeline] } 00:32:56.857 [Pipeline] // timeout 00:32:56.861 [Pipeline] } 00:32:56.876 [Pipeline] // catchError 00:32:56.882 [Pipeline] } 00:32:56.897 [Pipeline] // wrap 00:32:56.901 [Pipeline] } 00:32:56.916 [Pipeline] // catchError 00:32:56.926 [Pipeline] stage 00:32:56.929 [Pipeline] { (Epilogue) 00:32:56.943 [Pipeline] catchError 00:32:56.945 [Pipeline] { 00:32:56.961 [Pipeline] echo 00:32:56.963 Cleanup processes 00:32:56.970 [Pipeline] sh 00:32:57.263 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:57.263 819049 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:32:57.263 819338 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:57.281 [Pipeline] sh 00:32:57.571 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:57.571 ++ grep -v 'sudo pgrep' 00:32:57.571 ++ awk '{print $1}' 00:32:57.571 + sudo kill -9 819049 00:32:57.588 [Pipeline] sh 00:32:57.873 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:07.914 [Pipeline] sh 00:33:08.203 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:08.203 Artifacts sizes are good 00:33:08.218 [Pipeline] archiveArtifacts 00:33:08.225 Archiving artifacts 00:33:08.379 [Pipeline] sh 00:33:08.663 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:08.677 [Pipeline] cleanWs 00:33:08.688 [WS-CLEANUP] Deleting project workspace... 00:33:08.688 [WS-CLEANUP] Deferred wipeout is used... 00:33:08.695 [WS-CLEANUP] done 00:33:08.697 [Pipeline] } 00:33:08.720 [Pipeline] // catchError 00:33:08.734 [Pipeline] sh 00:33:09.016 + logger -p user.info -t JENKINS-CI 00:33:09.025 [Pipeline] } 00:33:09.043 [Pipeline] // stage 00:33:09.049 [Pipeline] } 00:33:09.066 [Pipeline] // node 00:33:09.072 [Pipeline] End of Pipeline 00:33:09.105 Finished: SUCCESS
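Two teardown patterns close the run above: stop_monitor_resources signals each resource collector through the pid file it left in the output/power directory, and the pipeline epilogue pgreps for anything still referencing the workspace and force-kills it before the artifacts are compressed and archived. A small sketch of both patterns; the pid-file names, the pgrep/grep/awk filter and the signals are the ones shown in the log, while the function names and the simplified POWER_DIR path are assumptions:

# Sketch only: stop_monitors/kill_leftovers and the POWER_DIR layout are assumed, not the job's code.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
POWER_DIR="$WORKSPACE/output/power"   # stand-in for spdk/../output/power in the log

stop_monitors() {
    # collect-cpu-load, collect-vmstat, collect-cpu-temp and collect-bmc-pm each wrote
    # a <name>.pid file; send SIGTERM to whichever of them are still present.
    local pidfile
    for pidfile in "$POWER_DIR"/collect-*.pid; do
        [[ -e $pidfile ]] || continue
        sudo kill -TERM "$(<"$pidfile")" 2>/dev/null || true
    done
}

kill_leftovers() {
    # Same filter chain as the epilogue: list processes still touching the workspace,
    # drop the pgrep invocation itself, and force-kill whatever pids remain.
    sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}' |
        xargs -r sudo kill -9
}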